The rise of AI, crowd sourced, and automated privacy policy analyzers
4 min read

I've seen a number of AI and automated privacy policy analyzers crop up, and I wanted to take a moment to say a few things about them.  The intention of these services is, largely, to raise awareness of the privacy implications of companies many of us use today.  Even with good intentions, the premise itself is flawed when viewed through the lens of privacy, and these tools should never be used as a substitute for your own research and critical thinking.

So, let's jump right in.  Here are a few reasons why I believe we shouldn't rely on or use websites like these.


Some services give extremely questionable ratings even in the face of privacy-violating features and policies.  For example, one service gives Telegram an A+, 105% positive rating even though Telegram explicitly tells us they store, and have access to, all of our data.  Even the quote they pulled from Telegram's policy and put in the big, blue box on the homepage says they keep our data.

Telegram is a cloud service. We store messages, photos, videos and documents from your cloud chats on our servers so that you can access your data from any of your devices anytime without having to rely on third-party backups. All data is stored heavily encrypted and the encryption keys in each case are stored in several other data centers in different jurisdictions.
Only 37% privacy friendly but has an A+, 105% rating

Setting aside the question of how a product can score above 100% at all, how is this rating logically possible when the service collects everything we do while using it?  Not just our IP address or what language our computer uses, but every letter, picture, video, file, document, and audio clip we send through their service is seen and stored by default.

Where is the accountability from a website that claims they are looking out for our privacy when they are misleading at best?  If they treat Telegram like this, what about all of the other services they rate?  How can we trust those?

Other services award positive ratings simply because a company discloses a privacy-violating practice, without docking any points for the practice itself invading our privacy.

1Password scored a misleadingly respectable 7/10 even though staff has access to our personal information

The core problem is that this score is based on the fact that they tell us they collect the data, not on the fact that they collect it in the first place.  Telling us what they store is good, but products and services should also be docked for collecting it to begin with.  Scoring companies favorably for admitting they invade our privacy skews these services in a more positive light, implying they are safe to use from a privacy perspective when they aren't.  For newcomers, and for people who simply glance at colors (green = good) or scores (8.6/10 = good), it's misleading and ripe for abuse.

This service also doesn't dock any points, or give any notice, for 1Password being a closed-source program that holds incredibly sensitive information.  If they rate services like this based on their words and not their actions, how can we trust the other services they scored?

Consistency is also an issue with websites like these.  Here's one where five separate services are graded as a B but have wildly different positive and negative marks.  1 pro/4 cons? B.  4 pros/1 con?  B.  5 pros/0 cons?  B. 1 pro/2 cons/2 neutral?  B.

How can all of these services garner the exact same grade?  More to the point, how are these services all rated as a B?  Almost everywhere in the world, a B is considered above average or good.  What type of scale allows a 5 pro/0 con option to be rated exactly the same as a 1 pro/4 con option?
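One way this can happen is a grading scheme with buckets so wide that very different tallies collapse into one letter.  Here's a minimal, purely hypothetical sketch of such a scheme; the thresholds and the `grade` function are invented for illustration, and no rating site is known to publish this exact logic:

```python
# Hypothetical illustration of a coarse pro/con grading scheme.
# The thresholds below are invented; they only show how a wide
# "B" bucket can swallow wildly different tallies.

def grade(pros: int, cons: int) -> str:
    """Map a pro/con tally to a letter grade via a naive net score."""
    score = pros - cons
    if score >= 6:
        return "A"
    if score >= -3:   # bucket spans -3 through 5: very forgiving
        return "B"
    return "C"

# The four B-rated tallies from the examples above (neutral marks ignored):
tallies = [(1, 4), (4, 1), (5, 0), (1, 2)]
print([grade(p, c) for p, c in tallies])  # prints ['B', 'B', 'B', 'B']
```

With a bucket that wide, a flawless 5 pro/0 con service and a 1 pro/4 con service are indistinguishable to anyone who only reads the letter, which is exactly the problem.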

I understand the goal the people creating websites like these have.  They want to give something to the world by exposing who the good samaritans and bad apples are when it comes to privacy.  But this is not something that can be crowdsourced, because promoters with malicious intent will manipulate scores; it can't be automated by AI, because AI can't understand nuance; and it can't be accurately scored at all, because a score or grade is inherently subjective, no matter the scale used.

For websites like these to work, there should be no scoring involved at all.  Maybe a subjective recommendation at the end, but the discussion should revolve around the facts and let the consumer decide what aligns with their ideals.  

I also understand that some people want the easy, "just give me the answer" option, but sometimes we simply have to spend the 5 or 10 minutes reading and reaching the conclusion ourselves.


Want to join the discussion?  Check out this post, and others, over at the CupWire subreddit and leave a comment.