
Common Crawl Foundation Providing Data For Search Researchers

mikejuk writes with an excerpt from an article in I Programmer: "If you have ever thought that you could do a better job than Google but were intimidated by the hardware needed to build a web index, then the Common Crawl Foundation has a solution for you. It has indexed 5 billion web pages, placed the results on Amazon EC2/S3, and invites you to make use of it for free. All you have to do is set up your own Amazon EC2 Hadoop cluster and pay for the compute time you use; accessing the data itself is free. The idea is to open up the whole area of web search to experimentation and innovation. So if you want to challenge Google, you can no longer use the excuse that you can't afford it." Their weblog promises source code for everything eventually. One thing I've always wondered is why no distributed crawlers or search engines have ever come about.
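For anyone wondering what "accessing the data" looks like in practice, below is a minimal sketch (not from the article) of listing a few of the crawl's ARC files from S3 with Python and boto3. The bucket name commoncrawl-002 is taken from a Common Crawl comment further down this thread and may since have changed; whether listing it requires requester-pays billing, or access from inside EC2, is an assumption here.

    # List a handful of objects from the Common Crawl S3 bucket.
    # Assumes AWS credentials are configured; the bucket name is the one
    # mentioned later in this thread and may be outdated.
    import boto3

    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(
        Bucket="commoncrawl-002",
        MaxKeys=10,
        # RequestPayer="requester",  # possibly required; an assumption
    )
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])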
This discussion has been archived. No new comments can be posted.

Comments:
  • by CmdrPony ( 2505686 ) on Monday November 14, 2011 @09:30PM (#38054962)
    But there's still a long way to go. They seem to have an archive of what they have crawled. That's it. Processing all those pages on EC2 is still going to be extremely costly and time-consuming.
    • Oh, and that is obviously only for simple stuff like what links to what. Google, Bing, and other search engines are much, much more complicated than that now. And you don't have access to the usage and keyword data that Google and Bing get from their enormous numbers of users.
    • by Gumber ( 17306 ) on Monday November 14, 2011 @10:11PM (#38055206) Homepage

      Bitch moan, bitch moan. If I had a need for such a dataset, I think I'd be damn grateful that I didn't have to collect it myself. As for the cost of processing the pages, the article suggests that running a Hadoop job on the whole dataset on EC2 might be in the neighborhood of $100. That's not that costly.

      • Re: (Score:3, Interesting)

        by CmdrPony ( 2505686 )
        To be honest, if I wanted to work on such data and didn't have lots of money, I would actually prefer collecting it myself. Sure, with EC2 I can easily throw more processing power at it and process it quickly, but I can get a dedicated 100 Mbit server with unlimited bandwidth for around 60-70 dollars a month. It also has more space and processing power than EC2 at that price, and I can process the pages as I download them. That way I would build my database structure as I go, and I'm guaranteed a fixed cost a month
        • Seriously, the EC2 cluster is already there; setting it up will cost you a lot less than building one from the ground up. Time costs money too on this planet. Also, most importantly, your 80-dollar box is not going to be able to store metadata on 5 billion web pages and process it at any reasonable I/O speed at all.

          Go build your own processing cluster and see how long it takes you to do that for less than what EC2 would charge. Once you're finished, you could make a business out of it and compete with Amazon. Th
          • Or not.

            If you're an academic, running a single Hadoop job like that is not as useful as it sounds. In research, you never know what you want until you do something and realize that's not it. To write a paper you'd want to run at least 10-20 full jobs, all slightly different.

            Luckily, lots of unis have their own clusters (aka beowulfs - I can't believe I have to point that out on slashdot...). It would really be great if the data could be duplicated so people could run the jobs on their own local setups.

        • I can get a dedicated 100 Mbit server with unlimited bandwidth for around 60-70 dollars a month

          No, actually, you can't.

    • I wonder how big of a torrent file that would make....

  • I mean, hosting the stuff on Amazon's servers is one thing - it has to be hosted somewhere - but what makes me uncomfortable is that anyone who wants to do any research on the data ends up having to pay Amazon.

    Hmm ....

    • Must be a conspiracy set up by Amazon to get people to pay for vast amounts of compute time. Why not allow people to purchase copies of the data on hard disk or tape? 5 billion pages at 100K each (a high estimate, perhaps) is 500 TB. With a good compression algorithm you could probably get it down to 10 TB. Not "that much" if this is the kind of research you are interested in. (A quick check of that arithmetic follows the replies below.)
      • I just did a calculation on the Amazon EC2 site, and it seems like you can run a micro instance practically for free for the first year. With 15 GB of storage and 10 GB of outbound bandwidth it costs something like $0.40 a month, and nothing if you take just 10 GB of storage. Guess you could do some simple stuff with that.
      • by Gumber ( 17306 )

        A conspiracy? You're going to have to pay someone for the compute time. It's not like a lot of people have big clusters lying around, so a lot of people are going to opt to pay Amazon anyway.

        As for selling access to the data on physical media, it doesn't look like there is anything to stop you from using Amazon's Export Service to get the data set that way.

      • Must be a conspiracy set up by Amazon to get people to pay for vast amounts of compute time. Why not allow people to purchase copies of the data on hard disk or tape? 5 billion pages at 100K each (a high estimate, perhaps) is 500 TB. With a good compression algorithm you could probably get it down to 10 TB. Not "that much" if this is the kind of research you are interested in.

        How much would that tape and tape drive or hard disk cost you to get started? How would that cost compare with the initial 750 hours of free compute time on EC2?
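A quick check of the back-of-the-envelope numbers a few comments up, using decimal units; the roughly 50:1 compression ratio is only the commenter's guess, not a measured figure.

    pages = 5_000_000_000
    avg_page_bytes = 100_000                   # 100K per page, the "high estimate"
    raw = pages * avg_page_bytes               # 5e14 bytes
    compressed = raw / 50                      # assumed ~50:1 compression
    print(raw / 1e12, "TB raw")                # 500.0 TB raw
    print(compressed / 1e12, "TB compressed")  # 10.0 TB compressed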

    • by Gumber ( 17306 )

      I don't get it. You are going to have to pay someone if you want to do any research on it. If you don't want to pay Amazon you could either crawl the data yourself, or pay the cost of transferring the data out of Amazon's cloud.

    • by zill ( 1690130 )

      I mean, hosting the stuff on Amazon's servers is one thing - it has to be hosted somewhere - but what makes me uncomfortable is that anyone who wants to do any research on the data ends up having to pay Amazon.

      Hmm ....

      So you expect the researchers to FedEx 100,000 2 TB hard drives to you upon request? We're talking about 200 petabytes of data here. It's gonna take forever to transfer no matter how wide your intertubes are. A shipping container of hard drives is literally the only way to move this much data in a timely manner.

      Since there's no easy way to move the data, it only makes sense to run your code on the cluster where the data currently resides.

  • Interesting, however (Score:4, Interesting)

    by CastrTroy ( 595695 ) on Monday November 14, 2011 @09:31PM (#38054976)
    Interesting. However, wouldn't one need to index the data in whatever format they need in order to actually search it and get useful results? You'd need to pay a fortune in compute time just to analyze that much data. It says they've indexed it, but I don't see how that helps researchers who will want to run their own indexing and analysis against that dataset. Sure, it means you don't have to download and spider all that data, but that's only a very small part of the problem.
    • by Gumber ( 17306 ) on Monday November 14, 2011 @10:08PM (#38055192) Homepage

      It may or may not be a small part of the problem, but it isn't a small problem to crawl that many web pages. This likely lets people save a lot of time and effort which they can then devote to their unique research.

      Maybe it will cost a fortune to analyze that much data, but there isn't really any way of getting around the cost if you need that much data. Besides, for what it's worth, the linked article suggests that a Hadoop run against the data costs about $100. I'm sure the real cost depends on the extent and efficiency of your analysis, but that is hardly "a fortune."

      • It may or may not be a small part of the problem, but it isn't a small problem to crawl that many web pages.

        Indeed, and there are more crawlers on the net than might be commonly supposed. Our home site is regularly visited by bots from Google, Bing, and Yandex, and occasionally by several others. The entire site (10s of GB) was slurped in a single visit by an unknown bot at an EC2 IP address recently. That bot's [botsvsbrowsers.com] user-agent string was not the same as the string used by the Common Crawl Foundation's bot.

    • You're absolutely correct - although if they do have it indexed, it's certainly much easier on the researchers. Actually, I worked on this project: http://lemurproject.org/clueweb09.php/ [lemurproject.org] ... and I can tell you first-hand, not only is it not easy to crawl that much data, but indexing it takes not only time but computing muscle, and lots, lots, lots of disks. It took us roughly 1 1/2 months to collect the data using a Hadoop cluster with 100 nodes running on it, and then roughly 2 months of com
  • It should be obvious (Score:5, Interesting)

    by DerekLyons ( 302214 ) <fairwater@@@gmail...com> on Monday November 14, 2011 @10:02PM (#38055154) Homepage

    One thing I've always wondered is why no distributed crawlers or search engines have ever come about.

    Because being 'distributed' is not a magic wand. (Nor is 'crowdsourcing', nor 'open source', nor half a dozen other terms often used as buzzwords in defiance of their actual (technical) meanings.) You still need substantial bandwidth and processing power to handle the index; being distributed just makes the problem worse, since now you also need bandwidth and processing power to coordinate the nodes.

  • by quixote9 ( 999874 ) on Monday November 14, 2011 @10:09PM (#38055198) Homepage
    Google's way of coming up with pageranks is fundamentally flawed. It's a popularity test, not an information content test. It leads to link farming. Even worse, it leads everyone, even otherwise well-meaning people, not to cite their sources so they won't lose pagerank by having more outgoing links than incoming ones. That is bad, bad, bad, bad, and bad. Citing sources is a foundation of any real information system, so Google's method will ultimately end in a web full of unsubstantiated blather going in circles. It's happening already, but we've barely begun to sink into the morass.

    An essential improvement is coming up with a way to identify and rank by actual information content. No, I have no idea how to do that. I'm just a biologist, struggling with plain old "I." AI is beyond me.
    • I should add: one ought to be actively rewarded for citing sources. Definitely not penalized.
      • So, how exactly would you fix it? How do you determine what is good information, or relevant results? How do you rank them? Please describe your algorithm.
    • by Twinbee ( 767046 )

      Surely it would be possible to tweak the algorithm so outbound links don't detract from the site, and keep things mathematically sound?

    • by GuB-42 ( 2483988 )
      PageRank is just part of the picture. Google uses many other metrics to rank websites, but those metrics are kept secret. Also, I don't see why having more outgoing links than incoming ones would harm your PageRank: your links will be worth less to each referenced site, but it shouldn't change anything for you (see the sketch just after this thread). And if you don't want to make your links trackable by Google, just use "nofollow". Manipulating search engine results is not that easy.
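Since the question of whether outgoing links hurt a page's own PageRank comes up repeatedly above, here is a minimal sketch of the textbook PageRank iteration, the published formulation rather than whatever Google actually runs today. Nothing in the update subtracts from a page for linking out; adding outbound links only splits the score that page passes along, so any effect on its own rank is indirect, through the rest of the link graph.

    # Minimal textbook PageRank power iteration (illustrative only).
    def pagerank(links, damping=0.85, iters=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = set(links) | {t for outs in links.values() for t in outs}
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iters):
            new = {p: (1 - damping) / n for p in pages}
            for page, outs in links.items():
                if outs:
                    share = damping * rank[page] / len(outs)
                    for target in outs:
                        new[target] += share
                else:  # dangling page: spread its rank evenly
                    for p in pages:
                        new[p] += damping * rank[page] / n
            rank = new
        return rank

    # "a" citing one source vs. two: nothing subtracts from "a" for linking
    # out; only the share each target receives changes.
    print(pagerank({"a": ["b"], "b": ["a"], "c": ["a"]}))
    print(pagerank({"a": ["b", "c"], "b": ["a"], "c": ["a"]}))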
  • I'm not trying to be mean, just stating the facts. Out of the "billions" of crawled web pages, even common search phrases come up with results that are only a fraction of what can be pulled from a standard search with Google, Yahoo, Bing, etc. That's not to say that this project is without its merits. It's a good idea, but I believe its developers are starting in the wrong place. The real money to be made from this kind of undertaking is NOT to create a better search engine. This kind of project
  • Wait, what? (Score:5, Interesting)

    by zill ( 1690130 ) on Monday November 14, 2011 @10:49PM (#38055404)
    From the article:

    It currently consists of an index of 5 billion web pages, their page rank, their link graphs and other metadata, all hosted on Amazon EC2.

    The crawl is collated using a MapReduce process, compressed into 100Mbyte ARC files which are then uploaded to S3 storage buckets for you to access. Currently there are between 40,000 and 50,000 filled buckets waiting for you to search.

    Each S3 storage bucket is 5TB. [amazon.com]

    5TB * 40,000 / 5 billion = 42MB/web page

    Either they made a typo, my math is wrong, or they started crawling the HD porn sites first. I really hope it's not the latter because 200 petabytes of porn will be the death of so many geeks that the year of Linux on the desktop might never come.

    • 42 MB is not really that big for a "modern" webpage. People put a lot of images on their web pages these days. Add flash apps or forums to that, and many sites get quite big. Text only pages exist mainly in the realm of geeks. When you include sites like IBM, Apple, HP, Dell, etc... you're getting GBs of data.

    • by Anonymous Coward

      200 petabytes of porn

      We need a mascot for such an invaluable resource. I vote we call it Petabear

    • Not filled buckets, but filled 100 MB files. So their data takes about 4-5 TB of storage space.
      • ....so how much would it cost (in dollars) to run a single MapReduce word count against that? (A minimal word-count sketch follows this sub-thread.)

        Also, why not do the torrent thing? E.g. 100 GB torrent dumps, with more added on a regular basis?

        • Because:

          a) They'd have to pay to seed it

          b) The data changes frequently (it is a web crawler after all)

          c) Not everyone has the servers necessary to process that much data, while anyone can use Hadoop on Amazon
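As for what "a single map-reduce word-count" would actually look like, below is a minimal Hadoop Streaming version in Python. This is a generic sketch, not code from the project, and it assumes the ARC records have already been unpacked into plain text lines; the real dollar cost still depends on the instance types chosen and how long the 4-5 TB takes to stream through.

    # mapper.py -- emits "word<TAB>1" for every whitespace-separated token.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(word + "\t1")

    # reducer.py -- Hadoop Streaming sorts mapper output by key, so equal
    # words arrive consecutively and can be summed in a single pass.
    import sys

    current, count = None, 0
    for line in sys.stdin:
        word, _, n = line.rstrip("\n").partition("\t")
        if word == current:
            count += int(n)
        else:
            if current is not None:
                print(current + "\t" + str(count))
            current, count = word, int(n)
    if current is not None:
        print(current + "\t" + str(count))

The two scripts would be launched through the hadoop-streaming jar with its -input, -output, -mapper and -reducer options; the exact invocation depends on the Hadoop version.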

    • by Anonymous Coward

      Hi, sorry, there is a typo on the CC website. There are currently 323,694 items in the current commoncrawl bucket (commoncrawl-002), and each file is very close to 100 MB in size (the total bucket size is 32.3 TB). There are also another 132,133 items in our older bucket, which we will be moving over to the current bucket shortly.

    • There are 40,000-50,000 buckets that each contain one compressed 100 Mbyte ARC file, which comes to 4-5 TB of total data. So 5 TB / 5 billion pages = 1 KB of compressed data per page.
    • If there were "40,000 - 50,000 filled buckets" at 5TB per bucket that would mean: 5TB * 40,000 = 200,000TB or 200PB. That doesn't sound reasonable.
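A quick check of the two readings above, in Python (decimal units for simplicity):

    pages = 5_000_000_000
    # Reading "40,000-50,000 filled buckets" at 5 TB per bucket:
    print(40_000 * 5e12 / pages)    # ~40 MB per page -- the implausible 200 PB reading
    # Reading it as 40,000-50,000 ARC files of ~100 MB each:
    print(40_000 * 100e6 / pages)   # ~800 bytes of compressed data per page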
