It is finally time for the U.S. government to step in and challenge multi-billion-dollar companies whose suspicious practices harm the general public. This time, the government seems to have chosen a case it can actually win against Meta: it is difficult to defend yourself against a claim of discrimination against a protected class when that is exactly what an algorithm Meta designed does. Hopefully, action in this direction will force Meta and other giants to keep a careful watch on their own practices as the government grows more and more confident in its ability to throw heavy lawbooks at billionaires.
AAG Clarke made a brilliant comment that now commits the Justice Department to holding tech giants responsible for perpetuating systems of oppression. However, the word “unlawfully” in that statement makes me question how far the government’s commitment actually extends. There are plenty of instances of people, especially marginalized people, being harmed by the practices of tech companies, yet the government does not prosecute those cases nearly as aggressively as it is pursuing this one.
Consider Slate’s Evil List, a ranking of tech companies compiled from the responses to a survey Slate designed and distributed. One example of a tech company harming people is Vigilant Solutions, which sells data and analytics systems for surveillance and cooperated with ICE to track the license plates of undocumented immigrants. This needs little elaboration: it is very clearly an instance of the U.S. government working with a private tech company to go after people, justified only because the pursuit is carried out in the name of the law. Verizon’s assault on net neutrality in the interest of making money is also rightly why it lands at number 16 on the list. And we do not argue with Amazon sitting right at number one.
When these companies do face consequences for their dubious practices, the language used to describe the cases is strange. For example, NPR’s article uses the word “improperly” to describe third-party advertisers’ use of the algorithm, when the real question is why an immaterial characteristic like race can be toggled to target (or exclude) such specific groups of people at all. Though let’s give credit where it’s due: the government is very good at investigating antitrust cases.
The “door-slamming in the face” analogy was interesting; we tend to forget that the things we do digitally are no different from the things we do in person. Advertising online is essentially the same as going around taping up posters or paying for billboards. The only difference is that the internet is now far more readily accessible.