Kaushik argues that, used correctly, web analytics can answer four important questions: What, How Much, Why, and What Else.
• The What: Clickstream Analysis, defined as the route visitors take through the website as they navigate by clicking from page to page.
Too much data can create noise and bury important information, precisely because the number of inputs that can be tracked is almost infinite. This paradox recalls pointillist art: viewed from the right distance, the picture appears clear and beautiful; our brain makes sense of the pattern and literally connects the dots to perceive a still life, a starry field, a historical figure. Standing too close, however, we lose sight of the whole picture and see only small points of paint, which neither reflect what the author intended nor carry any meaning on their own.
The webmaster has to decide how fast he wants the page to load and how explicit he wants to be in indexing his site. For this purpose he must weigh certain elements: what size and resolution will the site's images and videos use? What internet speed is ideal for the site to load? Is that the speed the target user actually has? Most likely some level of compromise will be necessary to provide the best experience for most users.
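The trade-off between asset size and connection speed can be made concrete with a rough back-of-the-envelope calculation (the file sizes and connection speeds below are purely illustrative):

```python
def load_time_seconds(file_size_mb, connection_mbps):
    """Approximate transfer time for one asset.

    File size is in megabytes, connection speed in megabits per second,
    so the size is multiplied by 8 before dividing by the speed.
    """
    return file_size_mb * 8 / connection_mbps

# A 2 MB hero image on a 4 Mbps connection:
print(load_time_seconds(2, 4))   # 4.0 seconds for that one asset alone

# The same image compressed down to 0.5 MB:
print(load_time_seconds(0.5, 4)) # 1.0 second
```

Even this crude model shows why the webmaster must know the target user's typical bandwidth before choosing image and video resolutions.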
Search Robots or Web Crawlers are tangentially related to the concepts discussed above, since they have the task of systematically indexing the web. In plain words, web indexing serves the purpose of making public websites available to regular users. Search robots use their software to update the content they find on the web by copying pages and processing them. This is done so that users can find the pages faster and more effectively, even if the pages are offline.
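The copy-process-queue loop that a crawler performs can be sketched in a few lines of Python using only the standard library. The seed URL is hypothetical, and the `fetch` callable is injected so the loop itself stays network-free; this is a simplified sketch, not how any real search engine is implemented:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))


def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links


def crawl(seed_url, fetch, max_pages=100):
    """Breadth-first crawl: fetch a page, store a copy, queue its links.

    `fetch` is any callable that returns the HTML for a URL (in practice,
    something built on urllib.request). The stored copies are what let a
    search engine serve results even when the original page is offline.
    """
    queue, seen, index = [seed_url], {seed_url}, {}
    while queue and len(index) < max_pages:
        url = queue.pop(0)
        html = fetch(url)
        index[url] = html  # the cached copy kept by the indexer
        for link in extract_links(html, url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index
```

The breadth-first queue and the `seen` set are the essential pieces: they keep the robot from revisiting pages while it systematically expands outward from the seed.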
One example of Web Crawling is Google’s indexing tool, which offers site owners the ability to be included in its index and to give Google specific instructions by using a “robots.txt” file that tells the search engine how to process each page on the site.
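To make the mechanism concrete, a robots.txt file placed at the root of a site might look like the following (the paths and sitemap URL are illustrative, not taken from any real site):

```
# Rules for Google's crawler specifically
User-agent: Googlebot
Disallow: /private/
Allow: /private/public-report.html

# Rules for all other crawlers
User-agent: *
Disallow: /drafts/

Sitemap: https://www.example.com/sitemap.xml
```

Each `User-agent` group addresses a particular robot, and the `Disallow`/`Allow` lines tell it which paths to skip or index, which is exactly the page-by-page instruction the text describes.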