Notes about authority and evaluating sources 2

Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of Pragmatics, ?. http://dx.doi.org/10.1016/j.pragma.2013.07.012

Evaluating credibility in digital media draws on the same cognitive skills and abilities as in other media, but those skills are called into use much more often.

In digital environments, disintermediation removes, or calls into question, the experts, opinion leaders, and information arbiters we traditionally relied on to judge credibility for us.

Source credibility (the believability and trustworthiness of the speaker) versus information credibility (the believability and trustworthiness of the message). You can evaluate either or both.

When putting information out there took a big investment, there was more of a meritocracy – no one would invest in, or stake their reputation on, information that didn’t satisfy some standard of theirs. Now the barriers to publishing are much lower, so you can’t assume that information has met any standard at all.

Online information may lack author identity information, or carry false identity information. It may also be co-authored, a derivative work, aggregated, etc. This creates uncertainty about who is responsible for the information.

Information online is easily altered, and alterations are hard or impossible to detect.

Different types of content are blended more imperceptibly online (advertising and information, for example): sponsored links, embedded ads. Similar formatting is interpreted as a similar level of quality.

“Context deficit” online: it is hard to notice or remember where, and from whom, information came.

Have to evaluate credibility at many levels – information, individual author, site, organization…

Recommend checking: Accuracy, Authority, Objectivity, Currency, and Coverage/Scope of the information and/or the information source.

People rarely evaluate information sources that way; instead they decide based on web design and navigability (it’s easier, faster, and more convenient to decide that way).

Cognitive heuristics are used to minimize the cognitive effort and time spent on information search and evaluation.

Sundar (2008) first proposed that heuristics concerning the content and the technology guide the evaluation of credibility:
– modality (text, audio, video)
– agency (perceived source)
– interactivity
– navigability

Metzger (2010) found that in information-rich environments, searchers use heuristics involving the following (a toy scoring sketch follows this list):
– reputation: prefer recognized alternatives over unfamiliar ones. Especially known “official” authorities.
– endorsement: prefer alternatives recommended by known others, or by the crowd.
– consistency: the information is the same across different sources, although the checking may be superficial.
– self-confirmation: prefer alternatives that agree with what you already believe over those that don’t. People tend to be even more biased when searching online because they don’t have time to deal with the information overload. 
– expectancy violation: if a web site fails to meet any sort of expectation (even if it just provides more information than requested), it is judged not credible. The triggers are usually bad design, navigation, spelling, grammar, or appearance.
– persuasive intent: favor information that appears unbiased. A defense against the feeling of being manipulated – try to detect ulterior motives. People tend to avoid commercial information.
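
These heuristics are cognitive shortcuts, not an algorithm, but a toy sketch can make the push-and-pull concrete. Everything below – the signal names, the weights, the scoring function – is my own invention for illustration, not anything from Metzger’s paper:

```python
# Toy model: Metzger's six heuristics as boolean signals combined into a
# rough credibility score. All names and weights here are invented.
from dataclasses import dataclass

@dataclass
class SourceSignals:
    recognized_source: bool               # reputation: a known "official" authority
    endorsed_by_known_others: bool        # endorsement: friends or the crowd
    consistent_with_other_sources: bool   # consistency (often checked superficially)
    agrees_with_prior_beliefs: bool       # self-confirmation (a bias, not a virtue)
    violates_expectations: bool           # expectancy violation: bad design, typos, etc.
    appears_commercial: bool              # persuasive intent: ads, ulterior motives

def heuristic_credibility_score(s: SourceSignals) -> float:
    """Crude weighted sum mimicking how the heuristics push a judgment
    up or down. Weights are arbitrary placeholders."""
    score = 0.0
    score += 0.25 if s.recognized_source else 0.0
    score += 0.20 if s.endorsed_by_known_others else 0.0
    score += 0.20 if s.consistent_with_other_sources else 0.0
    score += 0.15 if s.agrees_with_prior_beliefs else 0.0  # the bias at work
    score -= 0.30 if s.violates_expectations else 0.0
    score -= 0.20 if s.appears_commercial else 0.0
    return score

# Example: a familiar, endorsed, consistent site with sloppy design
# still takes a big hit from the expectancy-violation heuristic.
print(heuristic_credibility_score(
    SourceSignals(True, True, True, False, True, False)))  # 0.35
```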

A false equation between popularity and credibility: if something is unpopular, that doesn’t mean it isn’t credible; if something is popular, it might be credible, or it might just be a really good manipulation.

Notes about authority and reliability in evaluating sources

Lankes, R. D. (2008). Credibility on the internet: shifting from authority to reliability. Journal of Documentation, 64, 667-686. http://dx.doi.org/10.1108/00220410810899709

Holy cow, this is a good article! It is available in a repository – you can find it through Google Scholar.

p. 668 – “Paradox of ‘information self-sufficiency’”
(What’s the paradox? People are increasingly forced to be self-reliant when it comes to finding and managing information and performing information tasks…

p. 671 – but they are also increasingly dependent on the sources of the information they get, with reduced clues as to their credibility.)

p. 668 – This paradox leads to shifting of credibility tools and techniques from authority model to reliability model

p. 669 – Credibility = “the believability of a source or message, which is made up of two primary dimensions: trustworthiness and expertise” (Flanagin and Metzger, 2007, in MacArthur 2007).

p. 669 – Credibility is not inherent in or determined by the information source; it is determined by the information user.

p. 670 – In the digital medium, information is disconnected from its physical origin and interactions are mediated. This reduces clues about credibility; the only evidence is the information itself.

p. 671 – Pask’s (1976) conversation theory – knowledge is created in conversation (between individuals, between groups, even between an individual and a static information source, or an individual and themself…)

p. 672 – Mediating tools (hardware, software, infrastructure) can provide clues (accurate or not) about the credibility of information sources. E.g. load time of web pages, how graphics are displayed. Judgements not necessarily conscious.

p. 673 – At the infrastructure level, institutions (libraries, schools) can block resources and services so that they appear to just not exist or to be down. At the application level, filters can mark emails as spam or sites as dangerous, and the algorithms aren’t necessarily very fine-tuned. At the information service level, Google skews search results based on popularity and weighting, and also on the user’s interests.
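
To make the popularity-skew point at the information service level concrete, here is a deliberately artificial sketch – not Google’s actual ranking, which is proprietary and far more complex. The pages and weights are invented; the point is just that a popularity term can let a thin but popular page outrank a more relevant obscure one:

```python
# Toy sketch of popularity-weighted ranking (entirely hypothetical weights).

def rank(pages, relevance_weight=0.4, popularity_weight=0.6):
    """Rank pages by a blend of relevance and popularity.
    With these weights, popularity can dominate relevance."""
    return sorted(
        pages,
        key=lambda p: (relevance_weight * p["relevance"]
                       + popularity_weight * p["popularity"]),
        reverse=True,
    )

pages = [
    {"url": "https://example.org/obscure-but-accurate",
     "relevance": 0.9, "popularity": 0.1},
    {"url": "https://example.com/popular-but-thin",
     "relevance": 0.5, "popularity": 0.9},
]

for p in rank(pages):
    print(p["url"])
# The popular page wins: 0.4*0.5 + 0.6*0.9 = 0.74
# versus                 0.4*0.9 + 0.6*0.1 = 0.42
```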

p. 674 – “To prepare users to make fully informed credibility decisions, they must either become truly fluent in the technology of digital networks, or become aware of potential biases in the network technology itself… what is the role of users in determining the unavoidable biases and manipulations in the underlying network itself?”

p. 675 – The old model of software credibility rested on the reputation of the software firm for quality, good practices, and a clean track record; security depended on secrecy, with few people knowing about weaknesses. The new model of software credibility rests on openness, so that weaknesses can be identified and solutions debated and tested.

p. 676 – Lankes says that with enough training, the user has the access needed to test the tools and to develop new ones. But it seems to me that that training is an investment of time and money that most people can’t afford to make! Most of us still need to depend on somebody else’s expertise and experience. It’s just not as uncomplicated a dependence as before.

p. 676 – Information services are increasingly participatory. Users contribute to conversations about information artifacts, creating new, fluid information artifacts.

p. 677 – “Increasingly users are looking to user-submitted comments, editorial reviews, and open conversations on a given topic, artifact, or idea to determine trust and expertise.”

p. 677 – Astroturfing – fake grassroots sites, fake reviews. It is harder to determine the credibility of any one piece of information.

p. 678 – Authority – a trusted source vouches for a piece of information. Different sources are trusted in different contexts. A source becomes authoritative by developing that trust through coherence and consistency (internal and external).

p. 678 – “This new paradigm is not without authority, but it does require more sophisticated methodologies for evaluating it” (McGuinness et al., 2006; Nikolaos et al., 2006).

p. 679 – “The problem of determining the credibility of internet-based information is not a crisis of authority, but rather a crisis of choice. There are simply more choices in whom to trust…”

p. 679 – “Many want the library to become a preferred provider of information. Yet, the concept of ‘preferred’ only works in an authoritarian view when there is someone who can make others prefer or select something over something else.”

p. 680 – “Through this direct access to source data a person can train themselves, formally or informally, until they feel they have sufficient expertise and trustworthiness to credibly interpret the information. Once the user takes it upon himself or herself to become an authority by directly evaluating and synthesizing often raw information, authority ends, and ‘reliability’ becomes the predominant form of credibility assessment.”

p. 680 – Reliability – the source’s information is consistent with reality, and it is consistent over time. Reliability is a path to authoritativeness; a lack of reliability can destroy authoritativeness.

p. 681 – “If someone consistently gives out accurate (and testable) information in the absence of countervailing factors, they are seen as an authority.”

p. 681 – There’s also reliability by consent, as when a group decides that one information source will hold the information used to cross-check all other information. For example, the Library of Congress authority file of book authors’ names and birth and death information.
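
A toy sketch of what cross-checking against an agreed-upon file might look like – the records and field names are invented, and real authority control (e.g., the Library of Congress name authority file) is far richer than this:

```python
# Toy "reliability by consent": one agreed-upon authority file is the
# reference against which all other records are checked. Data is invented.

authority_file = {
    "Austen, Jane": {"born": 1775, "died": 1817},
    "Baldwin, James": {"born": 1924, "died": 1987},
}

def check_record(name: str, born: int, died: int) -> str:
    """Flag a catalog record that disagrees with the authority file."""
    ref = authority_file.get(name)
    if ref is None:
        return "no authority record"
    if (born, died) != (ref["born"], ref["died"]):
        return "conflicts with authority file"
    return "consistent"

print(check_record("Austen, Jane", 1775, 1817))    # consistent
print(check_record("Baldwin, James", 1924, 1988))  # conflicts with authority file
```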

p. 681 – “the tools built for users to find and use credible information must be increasingly participatory and facilitate reliability approaches.”

TED Talks – my cure for burnout

I have been struggling with two interlocking projects for almost a year now, and between that and the fact that my life has been stressful as hell this summer, I have a raging case of burnout. Today I had a choice between staring miserably at my screen, painfully typing a few sentences over the course of an hour only to delete them because they suck… or doing something else.

Tumblr sang its siren song but in a heroic feat worthy of songs or at least chocolate, I resisted. Instead, I started to watch TED talks about metaliteracy, trying to refill that reservoir of ideas with things that I can connect in interesting and insight-giving ways… because my two interlocking projects are a larger information literacy self-paced course and a smaller (but still not small) set of tutorials on evaluating information sources.

I’m going to share the good ones:

Is Metaliteracy really new? Is it feasible?

First, let me say that I am 100% behind the metaliteracy initiative. I think that everything about it – the emphasis on skills for dealing with information overload; taking evaluation to a higher, more critical level; and moving beyond finding, evaluating, and using toward a more participatory culture à la Henry Jenkins – is spot on.

I think that the ACRL Information Literacy Competency Standards have needed revision for a while, and there’s nobody better than Trudi to do it.

But is Metaliteracy actually something new? Or is it just a new way of making sense of things we’ve already known? This is my driving question for this MOOC.

A second, also pressing question: Is Metaliteracy something practical for every library and every librarian in every instructional situation? When I’m struggling to get students to use a scholarly article or two, and I don’t have much institutional backing or staff, do I baby step towards Metaliteracy? Jump in whole-hog and let the details sort themselves out? Leave it for a better future and keep plugging away at those basic component skills? (I hope not, because how the heck are we supposed to get to a better future by doing the same broken thing we’ve been doing?)