Monday, July 28, 2008

Google Clocks 1 Trillion unique web-content pages

http://www.ohgizmo.com/wp-content/uploads/2008/07/gooooogle.jpg

We know that Google is huge, tracking every bit of the "internet" with its massive "google-ware".
I happened to read this big news yesterday when Google announced on their blog that their "systems that process links on the web to find new content hit a milestone: 1 trillion (as in 1,000,000,000,000) unique URLs on the web at once!"

Now wrapping your head around how big 1 trillion really is is insane... but as they say... it's big... and I am sure there are plenty more pages waiting to be "indexed". You are welcome to read the original article to learn how they track these pages and how things "actually" work at Google.
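To get some feel for the scale, here is a rough back-of-the-envelope sketch. The average URL length and the crawl rate below are my own assumptions, not Google's numbers:

```python
# Rough scale of 1 trillion URLs (assumed numbers, not Google's).

TOTAL_URLS = 10 ** 12          # 1 trillion unique URLs
AVG_URL_BYTES = 77             # assumed average URL length in bytes
PAGES_PER_SECOND = 1_000_000   # assumed crawl rate: 1 million pages per second

url_storage_tb = TOTAL_URLS * AVG_URL_BYTES / 10 ** 12
days_to_crawl = TOTAL_URLS / PAGES_PER_SECOND / 86_400

print(f"URL strings alone: ~{url_storage_tb:.0f} TB")
print(f"One full pass at 1M pages/sec: ~{days_to_crawl:.1f} days")
```

Even with those generous assumptions, just storing the URL strings runs into tens of terabytes, and revisiting every page once takes days.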

Accuracy?
While it's great that Google is tracking every bit of information, I am wondering how they keep it up to date. There is perpetual internet decay (internet rot or link rot): links disappear, websites shut down, sites get hacked, or people simply change their addresses. I also wonder what Google does with the cached data.
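As a toy illustration of what detecting link rot involves (this is just my own sketch, not how Google actually does it), here is a minimal check of whether a URL still responds:

```python
import urllib.request
import urllib.error

def is_link_alive(url, timeout=10):
    """Return True if the URL still answers with a non-error status."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.HTTPError, urllib.error.URLError):
        # 404/410, DNS failure, connection refused, etc. -- the link has rotted
        return False

# Example usage
for url in ["http://www.google.com/", "http://example.com/no-such-page"]:
    print(url, "alive" if is_link_alive(url) else "dead")
```

Multiply that simple check by a trillion URLs, on a schedule, and you see why keeping an index fresh is a problem in its own right.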


http://www.orangeinks.com/wp-content/uploads/2008/04/googlolopoloy.gif

Downside?
Now it's been proved that being indexed by Google is a "privilege" and not a "right". While I am a big lover of Google, it also scares me a bit, as we are moving towards a clear-cut monopoly. Any system that becomes too big to compete with will necessarily bring injustice to some small group of people... I hope I am wrong here.
