Friday, 27 January 2012

Chapter : 9

Good Websites and Information Bias


Good Websites 

- Are impeccably clean

- Have personality

- Stand out

- Are extremely effective

- Are well thought out and usable

Example





Information Bias
Description

When we are trying to make a decision, we generally seek data on which to rationally base the choice. This goes wrong when we assume that all information is useful and that 'more is better'.
Sometimes extra information adds no significant value; sometimes it simply serves to confuse.

RESEARCH


Baron, Beattie, and Hershey (1988) gave subjects a diagnostic problem involving fictitious symptoms, tests, and diseases. Many subjects said they would need additional tests even when they already had sufficient data to decide.

EXAMPLE


A manager hires consultants to do a study of the marketplace when a third-party report is already available at far less cost.

SO WHAT?


Using it

When you want people to pay attention to your information even when they already have other information, you may be able to present it as, for example, 'new findings'.
You can also deliberately create overload by encouraging people to seek more and more data.

Defending


Think first about what information you need, and seek only what is sufficient and necessary.


Bias


A bias is a tendency. Most biases—like preferring to eat food instead of paper clips, or assuming someone on fire should be put out—are helpful. But cognitive shortcuts can cause problems when we're not aware of them and we apply them inappropriately, leading to rash decisions or discriminatory practices (based on, say, racism and sexism). Relying on biases while keeping them in check requires a delicate balance of self-awareness.


Wednesday, 26 October 2011

Chapter : 8

Search Engines


Three That Are One
Crawler-based search engines are made up of three major elements: the spider, the index, and the software. Each has its own function and together they produce what we have come to trust (or distrust) on the SERPs (Search Engine Results Pages).
The Hungry Spider
Also known as a web crawler or robot, a search engine spider is an automated program that reads web pages and follows any links to other pages within the site. This is often referred to as a site being "spidered" or "crawled". There are three very hungry and active spiders on the Net. Their names are Googlebot (Google), Slurp (Yahoo!) and MSNBot (MSN Search).
Spiders start their journeys with a list of page URLs that have previously been added to their index (database). As the spider visits these pages, crawling the code and copy, it adds any new pages (links) it finds to its index. In this way, the spider can be said to feed an evolving index, which is discussed below.
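The crawl loop described above can be sketched in a few lines of Python. This is only an illustration, not how any real spider works: the "site" here is a dictionary of made-up pages, so the sketch runs offline, but the shape is the same as a real crawl: visit a page, store a copy in the index, and queue any newly discovered links.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical site: a dict stands in for the live web so the sketch runs offline.
pages = {
    "/home":  '<a href="/about">About</a> <a href="/news">News</a>',
    "/about": '<a href="/home">Home</a>',
    "/news":  '<a href="/about">About</a>',
}

def crawl(seed):
    """Breadth-first crawl: visit a page, index its copy, queue new links."""
    index = {}            # the spider's growing index: URL -> page copy
    frontier = [seed]
    while frontier:
        url = frontier.pop(0)
        if url in index or url not in pages:
            continue      # already spidered, or a dead link
        html = pages[url]
        index[url] = html  # "spidering" feeds the index
        parser = LinkExtractor()
        parser.feed(html)
        frontier.extend(parser.links)  # newly found links join the queue
    return index

print(sorted(crawl("/home")))  # ['/about', '/home', '/news']
```

Starting from one seed URL, the spider reaches every linked page, which is why a single link from an already-indexed site is enough to get a new page discovered.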
The spider returns to the sites in its index on a regular basis, scanning for any changes. How often the spider returns is up to the search engines to decide. Website owners do have some control over how often a spider visits their site by making use of a robots.txt file. Search engines first look for this file before crawling a page further.
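A robots.txt file lives at the root of a site and looks like the fragment below; the paths here are invented for illustration. Note that the file only requests behaviour from well-behaved crawlers, and some directives (such as Crawl-delay) are honoured by some spiders but not others.

```text
# Rules for all crawlers
User-agent: *
Disallow: /private/
Crawl-delay: 10

# A specific crawler can be given its own rules
User-agent: Googlebot
Allow: /
```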
The Growing Index
An index is like a giant catalogue or inventory of websites containing a copy of every web page and file that the spider finds. If a web page changes, this catalogue is updated with the new information. To give you an idea of the size of these indexes, the latest figure released by Google is 8 billion pages.
It sometimes takes a while for new pages or changes that the spider finds to be added to its index. Thus, a web page may have been "spidered" but not yet "indexed." Until a page is indexed - added to the index - it will not be available to those searching with the search engine.
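Behind the catalogue metaphor, search indexes are typically built as an inverted index: a mapping from each word to the set of pages containing it, so a query can be answered without rescanning every page. A toy version, with invented page names and text:

```python
# Toy inverted index: maps each word to the set of pages containing it.
# Page names and text are made up for illustration.
pages = {
    "page1.html": "search engines crawl the web",
    "page2.html": "spiders feed the search index",
}

index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

# Answering a query is now a dictionary lookup, not a scan of every page.
print(sorted(index["search"]))  # ['page1.html', 'page2.html']
```

This also makes the spidered-versus-indexed distinction concrete: a page the spider has fetched but whose words have not yet been folded into this mapping simply cannot appear in any search result.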
The Search Engine Software
The third part of a search engine is the software: the program that sifts through the millions of pages recorded in the index to find matches to a search and ranks them in order of what it believes is most relevant. You can learn more about how search engine software ranks web pages on the aptly-named How Search Engines Rank Web Pages page.
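Real ranking software weighs hundreds of signals, but the core idea of scoring indexed pages against a query can be sketched with a deliberately naive relevance measure: count how often the query terms appear on each page. The page names and text below are invented.

```python
# Toy ranker: score each page by how often the query terms appear in it.
# A real engine uses far richer signals; this only illustrates the idea.
pages = {
    "a.html": "python search engine tutorial search tips",
    "b.html": "gardening tips",
    "c.html": "search basics",
}

def rank(query):
    terms = query.lower().split()
    scores = {url: sum(text.split().count(t) for t in terms)
              for url, text in pages.items()}
    # Highest score first; pages with no matching terms are dropped.
    return [url for url, score in sorted(scores.items(), key=lambda kv: -kv[1])
            if score > 0]

print(rank("search tips"))  # ['a.html', 'b.html', 'c.html']
```

Here a.html wins because it mentions both query terms, and "search" twice; that ordered list is, in miniature, a SERP.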


Example Search Engines on the Internet






Blinkx Video Search Engine

Monday, 10 October 2011

Chapter : 7

Thailand 


#Thai-flood 

Thaiflood.com intends to be a news and donation information centre. It publishes important updates and coordinates all rescue efforts for flood victims. We believe that Thais never abandon fellow Thais, and that this power of spirit will never disappear from Thai society. :)


Websites to help flood victims in Thailand


http://jitarsarissara.wordpress.com/tag/help-thai-flood/

https://www.facebook.com/AsaThai?ref=pb

http://www.thaiflood.com/

http://www.facebook.com/thaiflood



Map of Flooding in Thailand










Monday, 12 September 2011

Chapter : 3



The Difference Between
the Library of Congress Classification System (L.C.) and the Dewey Decimal Classification System (D.D.C.)

The Library of Congress Classification System (L.C.) uses the letters of the alphabet, A to Z, to represent the subjects in the library.






The Dewey Decimal Classification (D.D.C.) uses the numbers 000 to 900 to represent the subjects in the library.