URL Cleaner
- The Problem
Sometimes collection curators and content developers use web scrapers (Wikipedia, 2018a) to extract URLs from websites and search result pages. If a web scraper is not available, or the target search engine reacts against the scraping (Wikipedia, 2018b), URL extraction is still possible by installing a browser add-on like Copy Selected Links or a similar plugin. Once installed, users can right-click selected text and copy the URLs of any links it contains. To copy all links from a page, they just need to press
Ctrl + A
to select the entire page text, right-click the selection, and copy all available URLs at once. Rule of thumb: always use more than one browser. If a search engine-owned browser blocks an add-on, simply install it in a different browser. For instance, if almighty Google Chrome blocks the above add-on or interferes with its functionality, simply install it in Firefox and you are good to go. You may download similar add-ons for other browsers (Opera, Internet Explorer, etc.).
Regardless of how URLs are collected (with or without web scrapers/add-ons), the end result might be a list of dirty, ugly records with obscure attribute-value pairs appended by search engines.
Sometimes the URLs involve:
- Social networks
These URLs are often viewed by collection curators as "plastic contamination" in search results supposed to be "organic". Typical examples are results from Google and similar search engines.
- Self-promotions
The same search engine might include URLs pointing to unrequested content like its own products, services, partners/ads, links to additional content, etc. Typical examples are results from Google.
- Special characters
These are URLs with characters defining queries (?), fragment identifiers (#), and hash-bangs (#!), among others (Wikipedia, 2018c; 2018d).
- Encoded characters
These are URLs with %-encoded characters.
- Special strings
These are URLs with mailto:, javascript:, or data: that can pose adversarial issues.
- Shorteners
These are URLs obfuscated by shortening services; e.g., bit.ly, goo.gl, is.gd, t.co, and many more. Regardless of their merits, shortened URLs can open the door to all sorts of problems (Wikipedia, 2018e) and are frequently viewed by collection curators as unnecessary noise.
- Images
These are URLs pointing to png, gif, jpg, jpeg, or svg files. In text collections, these URLs are essentially noise, unless you are building collections about images or their attributes.
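The contaminant categories above can be expressed as a small classifier. Here is a minimal sketch in Python; the category labels, the abbreviated domain lists, and the function name are our own illustrations, not the tool's internals:

```python
from urllib.parse import urlsplit

# Abbreviated, illustrative lists; a real tool would track far more entries.
SOCIAL = {"facebook.com", "twitter.com", "linkedin.com", "youtube.com"}
SHORTENERS = {"bit.ly", "goo.gl", "is.gd", "t.co"}
IMAGE_EXTS = (".png", ".gif", ".jpg", ".jpeg", ".svg")
SPECIAL_SCHEMES = ("mailto:", "javascript:", "data:")

def classify(url: str) -> set:
    """Return the set of contaminant labels that apply to a URL."""
    labels = set()
    low = url.lower()
    # Special strings are whole-URL contaminants; stop early.
    if low.startswith(SPECIAL_SCHEMES):
        labels.add("special string")
        return labels
    parts = urlsplit(low)
    host = parts.netloc.removeprefix("www.")
    if host in SOCIAL:
        labels.add("social network")
    if host in SHORTENERS:
        labels.add("shortener")
    # Query strings and fragment identifiers (including hash-bangs).
    if parts.query or parts.fragment:
        labels.add("special characters")
    if "%" in low:
        labels.add("encoded characters")
    if parts.path.endswith(IMAGE_EXTS):
        labels.add("image")
    return labels
```

For instance, `classify("https://bit.ly/abc")` returns `{"shortener"}`, while `classify("https://example.com/page?x=1")` flags the query string as special characters.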
- The Solution
We developed this tool, the Minerazzi URL Cleaner (MUC), as a solution for letting users generate a list of clean, sorted, and deduplicated URLs, with options for selectively including or excluding some of the above contaminants. Unlike other URL cleaners, MUC cleans multiple URLs at once from search engines and websites, and can be used free of charge. Before proceeding any further, let's explain what MUC is and is not. The tool is a data cleaner and a lightweight version of our popular Editor and Curator tool. It is not a web scraper, URL validator, or URL shortener resolver, but it can be used to clean results from these.
- In the next section, we describe some uses for MUC, its features and limitations.
- Searches Support
MUC was designed to edit search result URLs from the following:
- Google, Bing, Yahoo, Yandex, and DuckDuckGo
- 100searchengines, HotBot, Ask, and textise.net
- Google Scholar
- Edits
The tool implements the following edit operations by default:
- Social networks
URLs pointing to LinkedIn, Facebook, Twitter, Myspace, Instagram, Pinterest, Snapchat, YouTube, Vimeo, Yelp, and Tumblr are removed.
- Self-promotions
URLs about the supported search engines and pointing to their products, services, partners/ads, or any additional content are removed.
- Special characters
Sections of a URL that start with ? # [ ] @ ! $ & ' ( ) * , ; = are removed. Trailing forward slashes (/) are also removed.
- Special strings
URLs with mailto:, javascript:, or data: are removed.
- Encoded characters
URL %-encoded characters are replaced by their unencoded versions.
- Shorteners
URLs obfuscated by shortening services are removed (nearly 600 of these and counting).
- Images
URLs pointing to images are removed.
Any of the above edits can be disabled by checking the corresponding checkbox(es).
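To make the default edits concrete, here is a minimal Python sketch of such a pipeline. The short domain lists are illustrative stand-ins (the actual tool tracks nearly 600 shorteners alone), and the function name and exact matching rules are our assumptions, not MUC's implementation:

```python
from urllib.parse import unquote, urlsplit

SOCIAL = {"linkedin.com", "facebook.com", "twitter.com", "youtube.com"}  # abbreviated
SHORTENERS = {"bit.ly", "goo.gl", "is.gd", "t.co"}                       # abbreviated
IMAGE_EXTS = (".png", ".gif", ".jpg", ".jpeg", ".svg")

def clean(urls):
    """Apply the default edits: drop contaminants, cut URLs at special
    characters, decode %-escapes, then deduplicate and sort."""
    kept = []
    for url in urls:
        low = url.lower()
        # Special strings: drop mailto:, javascript:, and data: URLs.
        if low.startswith(("mailto:", "javascript:", "data:")):
            continue
        host = urlsplit(low).netloc.removeprefix("www.")
        # Social networks and shorteners: drop the URL entirely.
        if host in SOCIAL or host in SHORTENERS:
            continue
        # Special characters: remove the section starting at the first one.
        for ch in "?#[]@!$&'()*,;=":
            url = url.split(ch, 1)[0]
        # Images: drop URLs pointing to image files.
        if url.lower().endswith(IMAGE_EXTS):
            continue
        # Encoded characters: replace %-escapes with their unencoded
        # versions, then remove any trailing forward slash.
        kept.append(unquote(url).rstrip("/"))
    return sorted(set(kept))
```

Note that cutting at `?` means two results that differ only in tracking parameters collapse to one URL, which the final `sorted(set(...))` step then deduplicates.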
- OR Curation Layer
- As of 08-17-2018, the tool implements an extra curation layer that lists URLs matching, or not matching, one or more terms or characters defined by the user. For instance, use this filter if you want to include in, or exclude from, the results URLs containing the "https" string or any other particular string.
- This is an OR filter, which means that it includes or excludes results based on any of the strings defined by the user. Multiple strings should be delimited by spaces.
- Please experiment and familiarize yourself with this filter before using it, as it can override some of the previous options.
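The OR behavior described above can be sketched in a few lines of Python. This is a simple substring-match sketch under our own assumptions; the tool's exact matching rules may differ:

```python
def or_filter(urls, terms, mode="include"):
    """Keep (mode="include") or drop (mode="exclude") URLs that contain
    ANY of the space-delimited terms."""
    wanted = terms.split()  # multiple strings are delimited by spaces
    matches = lambda u: any(t in u for t in wanted)
    if mode == "include":
        return [u for u in urls if matches(u)]
    return [u for u in urls if not matches(u)]
```

For example, `or_filter(urls, "https .org")` keeps every URL that contains either "https" or ".org", which is why this filter can override earlier options: a single matching term is enough to include (or exclude) a URL.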
- First time users
We recommend that first-time users install Copy Selected Links, or a similar add-on, before proceeding any further. Then do a search in Google and, with the add-on installed, clean URLs with MUC, first selectively and then at full blast.
- Tool limitations
Up to 5,000 URLs can be submitted at once. We arbitrarily imposed this limit to (a) provide fast responses, (b) minimize browser crashes, and (c) minimize abuses. Last but not least, the tool might fail to remove non-English, obfuscated, or encrypted characters.
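If a collection exceeds the 5,000-URL limit, one client-side workaround is to split the list into batches before submitting each one separately. A hypothetical helper, assuming the list is already in memory:

```python
def batches(urls, size=5000):
    """Yield successive chunks no larger than the submission limit."""
    for i in range(0, len(urls), size):
        yield urls[i:i + size]
```

Note that deduplication then happens per batch, so a final merge-and-dedupe pass over the combined results may still be needed.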
- Data miners, web developers, marketers, or anyone interested in cleaning search results.
- Do a search in Google. Repeat the same search in Bing and Yandex. With the add-on mentioned on this page, collect URLs from these engines' result pages. Submit the URLs to our tool. Compare the results from the three search engines.
- Repeat the previous exercise, but this time selectively include/exclude some of the URL contaminants discussed above. Compare results.
- Wikipedia (2018a). Web scraping.
- Wikipedia (2018b). Search engine scraping.
- Wikipedia (2018c). Query string.
- Wikipedia (2018d). Fragment identifier.
- Wikipedia (2018e). URL shortening.
Feedback
Contact us with any suggestions or questions regarding this tool.