
Monday, 24 October 2016

What are the ethics of web scraping?

Someone recently asked: "Is web scraping an ethical concept?" I believe that web scraping is absolutely an ethical concept. Web scraping (or screen scraping) is a mechanism to have a computer read a website. There is absolutely no technical difference between an automated computer viewing a website and a human-driven computer viewing a website. Furthermore, if done correctly, scraping can provide many benefits to all involved.

There are plenty of great uses for web scraping. First, services like Instapaper, which let you save content for reading on the go, use screen scraping to store a copy of a website on your phone. Second, services like Mint.com, an app that tells you where and how you are spending your money, use screen scraping to access your bank's website (all with your permission). This is useful because banks offer few ways for programmers to access your financial data, even if you want them to. By getting access to that data, programmers can provide really interesting visualizations of, and insight into, your spending habits, which can help you save money.

That said, web scraping can veer into unethical territory. One form this takes is reading websites much faster than a human could, which can strain the servers and degrade the website's performance for everyone else. Taken to the extreme, malicious hackers use this tactic in what's known as a "Denial of Service" attack.
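A well-behaved scraper avoids this by throttling itself and honouring robots.txt. Here is a minimal Python sketch of a polite fetcher; the base URL and the two-second delay are placeholder assumptions, not universal recommendations:

import time
import urllib.robotparser
import urllib.request

BASE = "https://example.com"  # hypothetical target site

# Honour the site's robots.txt before fetching anything.
robots = urllib.robotparser.RobotFileParser(BASE + "/robots.txt")
robots.read()

def polite_fetch(url, delay=2.0):
    """Fetch a page only if robots.txt allows it, then pause so the
    request rate stays well below what a human reader would generate."""
    if not robots.can_fetch("*", url):
        return None
    with urllib.request.urlopen(url) as response:
        html = response.read()
    time.sleep(delay)  # throttle: at most one request every `delay` seconds
    return html

page = polite_fetch(BASE + "/")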

Another aspect of unethical web scraping comes in what you do with that data. Some people will scrape the contents of a website and post it as their own, in effect stealing this content. This is a big no-no for the same reasons that taking someone else's book and putting your name on it is a bad idea. Intellectual property, copyright and trademark laws still apply on the internet and your legal recourse is much the same. People engaging in web scraping should make every effort to comply with the stated terms of service for a website. Even when in compliance with those terms, you should take special care in ensuring your activity doesn't affect other users of a website.

One of the downsides to screen scraping is that it can be a brittle process. Minor changes to the underlying website can leave a scraper completely broken. Herein lies the mechanism for prevention: changing the structure of your website's code can wreak havoc on a screen scraper's ability to extract information. Periodically making changes that are invisible to the user but alter the code being returned is the most effective way to thwart screen scrapers. That said, this is only a setback. Authors of screen scrapers can always update them and, since there is no technical difference between a computer-driven browser and a human-driven browser, there is no way to prevent access entirely.
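A toy illustration of that brittleness (both HTML snippets are invented for the example): the scraper below is keyed to one rendering of the markup, and a purely cosmetic change to the tags is enough to break it.

import re

# Two renderings of the same data; the second has had its markup shuffled.
old_page = '<span class="price">$9.99</span>'
new_page = '<div data-cell="p0">$9.99</div>'

# A scraper written against the old markup...
pattern = re.compile(r'<span class="price">([^<]+)</span>')

m = pattern.search(old_page)
print(m.group(1) if m else None)  # $9.99
m = pattern.search(new_page)
print(m.group(1) if m else None)  # None -- the scraper is silently broken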

Going forward, I expect screen scraping to increase. One of the main reasons for screen scraping is that the underlying website doesn't have a way for programmers to get access to the data they want. As the number of programmers (and the need for programmers) increases over time, so too will the need for data sources. It is unreasonable to expect every company to dedicate the resources to build a programmer-friendly access point. Screen scraping puts the onus of data extraction on the programmer, not the company with the data, which can work out well for all involved.

Source: https://quickleft.com/blog/is-web-scraping-ethical/

Thursday, 13 October 2016

How to use Web Content Extractor (WCE) as an Email Scraper?

Web Content Extractor is a great web scraping software developed by the Newprosoft team. The software has an easy-to-use project wizard for creating a scraping configuration and scraping data from websites.

One day I came across Visual Email Extractor, another Newprosoft product similar to Web Content Extractor, whose primary use is to scrape email addresses by crawling the websites you feed to it. I noticed that, with a small modification to the Web Content Extractor project configuration, you can use WCE just like Visual Email Extractor to extract email addresses.

In this post I will show you the configuration that makes Web Content Extractor extract email addresses. I still recommend Visual Email Extractor, as it has many more features beyond what email extraction with WCE offers.

Here is the configuration that makes WCE extract emails.

Step 1: Open Web Content Extractor, create a New Project, and click Next.

Step 2: Under the Crawling Rules -> Advanced Rules tab, apply the following settings. (A rough Python sketch of what these crawling rules do follows after the Level 2 settings.)

Crawling Level 1 Settings

In the 'Follow links if link text equals' text box, enter the following values:

*contact*; *feedback*; *support*; *about*

In the 'Follow links if URL contains' text box, enter the following values:

contact; feedback; support; about

In the 'Do not follow links if URL contains' text box, enter the following values:

google.; yahoo.; bing; msn.; altavista.; myspace.com; youtube.com; googleusercontent.com; =http; .jpg; .gif; .png; .bmp; .exe; .zip; .pdf;

Set 'Maximum Crawling Depth' to 2.

Set 'Crawling Order' to Depth First Crawling.

Tick the following checkbox:

Follow all internal links

Crawling Level 2 Settings

In the 'Follow links if link text equals' text box, enter the following value:

*contact*; *feedback*; *support*; *about*

In the 'Follow links if URL contains' text box, enter the following value:

contact; feedback; support; about

In the 'Do not follow links if URL contains' text box, enter the following value:

=http
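WCE applies all of these rules through its GUI, so there is nothing to code. Still, to make the logic concrete, here is a rough Python sketch of what the configuration amounts to: a depth-first crawl, capped at two levels, that follows only links whose text or URL mentions contact/feedback/support/about and skips the blocked URL fragments. The start URL is a placeholder, and this is my approximation of WCE's behaviour, not its actual implementation.

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

# The WCE crawling rules above, expressed as plain data.
FOLLOW_WORDS = ("contact", "feedback", "support", "about")
BLOCKED = ("google.", "yahoo.", "bing", "msn.", "altavista.", "myspace.com",
           "youtube.com", "googleusercontent.com", "=http",
           ".jpg", ".gif", ".png", ".bmp", ".exe", ".zip", ".pdf")
MAX_DEPTH = 2

class LinkParser(HTMLParser):
    """Collect (href, link text) pairs from a page."""
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            self.links.append((self._href, "".join(self._text).lower()))
            self._href = None

def crawl(url, depth=1, seen=None):
    """Depth-first crawl that follows only contact/feedback/support/about links."""
    seen = seen if seen is not None else set()
    if depth > MAX_DEPTH or url in seen:
        return
    seen.add(url)
    try:
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    except OSError:
        return
    print(f"depth {depth}: {url}")
    parser = LinkParser()
    parser.feed(html)
    for href, text in parser.links:
        target = urljoin(url, href)
        if any(block in target for block in BLOCKED):
            continue
        if any(word in text or word in target for word in FOLLOW_WORDS):
            crawl(target, depth + 1, seen)  # depth-first: recurse immediately

crawl("https://example.com")  # hypothetical start URL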

Step 3: After completing the settings above, click Next. In the Extraction Pattern window, click Define, enter any URL containing an email address in the Web Page Address (URL) field, and click the + sign to the right of Data Fields to define the scraping pattern.

Now, inside the HTML Structure pane, select the HTML checkbox or the Body checkbox, which means that for each page the whole page content will be parsed for data.

The last setting extracts emails from the page using a regular-expression-based email extraction function. Open the Predefined Script window, select 'Extract_Email_Addresses', and click OK. If the page you chose contains an email address, you will see the harvested email in the Script Result.
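For anyone curious what a regular-expression-based email extractor does under the hood, here is a small Python equivalent. The pattern below is a common simplified one and is my assumption; Newprosoft does not document the exact regex its 'Extract_Email_Addresses' script uses.

import re

# A widely used, simplified email pattern (an assumption -- WCE's actual
# predefined script may use something different).
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(html):
    """Return the unique email addresses found in a page's raw HTML."""
    return sorted(set(EMAIL_RE.findall(html)))

print(extract_emails('Reach us at <a href="mailto:sales@example.com">sales@example.com</a>'))
# ['sales@example.com']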

I hope this helps you use Web Content Extractor as an email scraper. Share your views in the comments.

Source: http://webdata-scraping.com/use-web-content-extractor-as-email-scraper/