Reflections on Trusting Algorithms
You open your phone and open Twitter (now X), and you're served some news and opinions from people, maybe a funny Tweet (now Post) once in a while.
Open up YouTube and you'll see a funny video on your front page. How fun. After that, from the "Watch Next" panel, you click the next video essay; it's a clip from a podcast, and it's quite insightful: you heard something you didn't know.
You look up that thing you heard about in the podcast on Google and click the first link... not quite enough. The second... OK, you kind of get it. It's now time to work, and you're glad your morning was somewhat productive.
This is a typical person's morning. Nothing wrong with it, until you notice something interesting: most of the information fed to us is picked by an algorithm.
In 1984, Ken Thompson wrote "Reflections on Trusting Trust". In it, software engineers of the time were slapped with the fact that, unknowingly, they place implicit trust in the compiler and in the system their code runs on. I won't rob you of your first discovery of the paper; if you've never read it, please do. After you've read (or watched) it, I want you to reconsider the "typical morning" scenario above.
In this blog post I want to raise concerns about how we as a society treat information and its "compilers", that is, the algorithms.
So many of us unknowingly trust that these systems and algorithms serve us "true" information, not information presented in a poisoned way. These algorithms run in a black box, created and tweaked by companies with incentives to maximize profit. They are so deeply entrenched in how we use technology to get information that we usually don't even consider them!
Consider "Googling", the act of searching for "true" information on the internet that is considered the bare minimum of effort. Even that is governed by a ranking algorithm controlled and owned by Google. Who is to say that Google is never a malicious actor in manipulating those search rankings?
Of course, as Ken Thompson said, you can't trust code that you did not create yourself, at least to an extent. That is why we have things like Dependabot checking for supply-chain issues, and a whole community of security engineers inspecting compiled code for malicious behavior. It's exactly why we love open-source software: we can check the code itself.
The point of this blog post is not to fearmonger, nor to say that you should never trust any algorithm. The point is to highlight the kind of implicit trust that should not be given carelessly to algorithms owned by companies with not-so-noble motives. I think they deserve more scrutiny, especially with the rise of LLMs, which are also controlled by for-profit companies.