The basic idea for this system is you assign judgment skill ratings to people you know in various disciplines and you also publish your own ratings on products/people/ideas. Based on who you trust and how much you trust them in different areas of judgment, this system will compute rating scores using your web of trust network on potentially any subject where people form opinions.
- @Blocktrades
Entrusting the future
This quote is regarding the development of a web of trust system for Hive and, like BT, I also find this area very interesting, as I think it is going to become standard practice in our world, where we will essentially be rated on our trustworthiness, which will affect things like our voting weight in elections. Yes, potentially dystopian in many ways, but I also think that the current systems of national governance are very flawed, where everyone has a vote regardless of their background or understanding.
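To give a sense of what such a system might actually compute, here is a minimal sketch, assuming a simple trust-weighted average - the names, numbers and weighting scheme are my own illustration, not BT's actual design:

```python
# A minimal sketch of a web of trust rating, assuming a simple trust-weighted
# average. All names, numbers and the weighting scheme are illustrative only.

# How much I trust each person's judgement in one narrow area (0.0 to 1.0).
my_trust_in_cars = {"alice": 0.9, "bob": 0.4, "carol": 0.1}

# The rating each of those people has published for one subject (out of 5).
their_ratings_of_model_x = {"alice": 4.5, "bob": 2.0, "carol": 5.0}

def web_of_trust_score(trust, ratings):
    """Average the published ratings, weighted by how much I trust each rater."""
    raters = [p for p in ratings if p in trust]
    total_weight = sum(trust[p] for p in raters)
    if total_weight == 0:
        return None  # no trusted opinions, so no score
    return sum(trust[p] * ratings[p] for p in raters) / total_weight

print(round(web_of_trust_score(my_trust_in_cars, their_ratings_of_model_x), 2))  # 3.82
```

The point is simply that the score any one person sees would be shaped by how much they trust each rater in that particular area, rather than by a raw, anonymous average.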
The entire pool is further muddied by the internet and its ability to change perceptions with what can effectively be baseless information or outright disinformation. We can see this in many areas and topics, where public opinion has been heavily swayed in one direction or another by what is effectively "fake news", without people considering the source of the information, the agenda of the source, or the ramifications of believing on blind faith. Social movements are propelled in this manner, all feeding upon the spread of information without any consideration of the authenticity or validity of the information supplied.
One example of this could be in the conspiracy arena, where most conspiracists collect and spread information based on the information supplied by others, while believing they are truth seekers. Conspiracists also have a personal agenda, as they maintain a view of themselves and their world that they are trying to reinforce in various ways. What is interesting is that a lot of the information they use is unverified and meme-based, with many disseminating content that has since been proven to have come from centralized sources with the express purpose of spreading false information to build social narratives that affect behavior, like the "meme farms" created by the Russian government to affect and disrupt social norms in the US. When a conspiracist is using the "in trend" terms and memes of the day, it is a very good indication that they are spreading information designed to be spread, by people who are smarter than they are.
The thing with the internet is that there is very little verification involved, while the stream of information is so fast that even if verification were possible, no one has the time to actually chase it up and make sure it is accurate - or even close to accurate. This is further exacerbated by the ease with which information can be edited and created to seem authentic, while misrepresenting or fabricating events to skew opinion.
Unverifiable trust
What is interesting with a lot of conspiracists is that while they believe there can be something like an organized cabal capable of manipulating everything in the world at a granular level, all while still finding the time to eat babies to stay young, they don't think that the same group can manipulate information well enough to fool them. Essentially, people see the evil in the content curation ability of Google, yet use Google to collect unverified information and treat it as trustworthy. As said, everyone has an agenda, and conspiracists are just as prone to confirmation bias as anyone else.
There is no longer the "saw it with my own eyes" approach, which in itself is not a reliable source of information. Many people no longer get their input information from the real world, especially when everyone is working remotely, instead gathering information with no possibility to cross-reference, verify, or temper it with reality. This makes the evidence heavily tainted, as the chain of informational custody is completely unknown and most if not all of the data points should be untrusted. Yet, people want to believe in something, so they lower the bar for plausibility and onboard the information blindly, as it suits the needs of the moment - those needs being personal.
Track recorded
So, this is where I think the web of trust could have an effect, as it could essentially start to grade information based upon data sources and sharing points, which would then track the chain of custody of the information, applying a confidence score at each node and at each point where it appears on the timeline. This means that once the information reaches a node, it would have a new confidence rating applied, dependent upon the understanding of that node - which could be based on historical information.
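As a rough sketch of what that could look like - assuming a simple multiplicative model and made-up trust values, rather than anything actually specified for Hive - the confidence in a piece of information could decay at every node it passes through:

```python
# A rough sketch of confidence decaying along a chain of custody. The model and
# all values here are assumptions for illustration, not a specification.

def propagate_confidence(source_confidence, chain, trust_in_node, unknown_trust=0.5):
    """Apply each node's trust factor in turn as the information is passed along."""
    confidence = source_confidence
    for node in chain:
        confidence *= trust_in_node.get(node, unknown_trust)  # unknown nodes drag it down
    return confidence

# Hypothetical chain: a claim from a well-rated reporter, laundered through a
# meme account, then re-shared by a friend.
trust = {"original_reporter": 0.9, "meme_account": 0.2, "friend_reshare": 0.7}
chain = ["original_reporter", "meme_account", "friend_reshare"]

print(round(propagate_confidence(1.0, chain, trust), 2))  # 0.13
```

In a model like this, confidence can only decay along the chain, so anything that has passed through unknown or untrusted hands arrives carrying very little weight - which is exactly the "chain of informational custody" problem described above.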
For example, a feedback form is filled in for every training I deliver, and out of the 40-odd deliveries so far this year, I have averaged an overall score of 4.6/5, while last year's average was 4.5/5 - last year was nearly all face-to-face trainings, this year's are entirely remote. This means that in this narrow area of training delivery, someone like my supervisor would be pretty confident in my delivery (or at least my ability to get deliveries rated well), as they have access to this information, know the sources of the ratings, and know that those sources are distributed across multiple companies and locations globally. Due to the systems we use, they also know that the information provided can't be faked by me.
This is historical information that can be used to predict the future with a level of confidence - a proven track record. Most of the information put out on the internet doesn't come with a proven track record; it is individual pieces of information, or information gathered around an idea, designed in a way that encourages virality and stickiness, as it targets the pain points of individuals who generally identify with some kind of demographic - whether they realize it or not. Most of the history of the information, and the filters through which it has passed, is untraceable at each point of exposure.
Social proof is not proof
This means that information coming through these skewed individuals and gathering around topics and groups will lose relevance, as it is skewed data, but gain importance as a movement as social proofing takes effect. It doesn't matter whether one is on the left or the right, a truth-seeker or the most gullible - everyone is prone to being manipulated, and I would predict that the most manipulated are those who believe they are smart enough not to be manipulated, as they tend to be confident in their abilities even though very little verification ever takes place.
Have you ever misjudged somebody? Ever met them, thought one thing and then found out later you were wrong? How did you find out you were wrong?
Now, every time we walk down the street and see someone, we make a judgement and because we never see that person again, our judgement stands, which means we get the sense that we were correct in our evaluation, even though there was no validation process. Because of this, we judge ourselves right more than wrong - since the few times we are wrong (based on some kind of verification through new information) are heavily outweighed by all the times we felt right (with no verification).
This same process happens on the internet with every bit of information we absorb, with the vast majority of it going unverified and, in the case of many conspiracy theories, being unfalsifiable. Then, we tend to silo ourselves into topics where we are biased toward believing what we already believe and rejecting what goes against our beliefs. This means that no matter what information arrives to weaken the confidence in our beliefs, we are going to find more information that supports them.
Full metal confidence
What would be interesting to see, if there were full transparency on the chain of information custody, is how trusting we would be of the nodes as sources of information, especially since the majority along that path of dissemination are going to be completely unknown to us - like the egg-faced Trump supporters on Twitter, sharing memes generated in those Russian meme factories, with millions of real people pushing that information further as if it were valid and verified. If there was a flag on each account or meme that could identify the original source with 100% accuracy, would the information still be trusted?
Trust is something that we have developed as a species in order to function as a community and society - but the way, and likely the speed, at which we have moved from small tribes to a global community, all within informational reach and all connected as an economy, means we have lost the ability to use our judgement to trust, as we no longer have the opportunity to get the information straight from the horse's mouth, so to speak. Instead, we have put our trust in unverifiable information sources which are creating and spreading information that furthers their own agenda in some way, whether it be for personal validation with respect to their social group norms, attention seeking, or for something like votes on Hive.
Uncovering the untrustworthy
In a world where there was an accurate web of trust, I believe that very quickly we would see how untrustworthy the sources of a lot of the information we use are. What this means is that within narrow groups, there are a lot of people who actually don't have any validity whatsoever, but build a position by collecting unverified information that panders to the desires of the group they are manipulating. I think this is especially true where a lot of the information is unfalsifiable, like within religions and conspiracies. It is not that all the information is wrong, but group acceptance doesn't make the information right either, even if 100% are believers.
While people can say "trust no one", the fact is that we have to trust all kinds of people, most of whom we will never even know exist, otherwise we can't function in this world. When we buy a new car, we trust that someone attached the brakes. When we open a bag of chips, we trust no one tipped a vial of cyanide in there. When we open a bank account, we trust that they aren't going to run off with our money.
When we act on information we have received, we trust it is correct.
Often, we find out our judgement was wrong.
Taraz
[ Gen1: Hive ]