This week I released a small utility application for macOS that lets users query Anthropic's LLM API (aka Claude) from any text field in the OS. What motivated me to build it: in my daily workflow I found myself using Claude quite often to fix small typos and grammar mistakes when writing in my non-native language. I thought, "what if I could just highlight a portion of text and ask Claude to review it with a shortcut?".
If you are curious, you can check it out. It's free; you just need a valid Anthropic API key to use it. And if you do try it, I'd be very happy if you could spare some time to send me feedback on your experience.
Because the app needs to handle global shortcuts and capture highlighted text in text fields, Apple requires users to grant a special permission via the Accessibility panel in System Settings. You have to grant this permission explicitly, as it gives an app greater – and potentially dangerous – access to your machine.
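For the curious, the check itself is a small piece of code. Here is a minimal sketch – not my app's exact code, but the standard mechanism – of how a macOS app asks for Accessibility access using Apple's ApplicationServices API:

```swift
import ApplicationServices

// Ask macOS whether this process already has Accessibility access.
// The prompt option makes the system show a dialog pointing the user
// to the Accessibility panel if access hasn't been granted yet.
let options = [kAXTrustedCheckOptionPrompt.takeUnretainedValue() as String: true] as CFDictionary
let trusted = AXIsProcessTrustedWithOptions(options)
print(trusted ? "Accessibility access granted" : "Waiting for the user to grant access")
```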
This raised an important question from users:
The problem
"How can I trust this app if I don't have a way to validate if it is not recording my activity to do something sketchy with my data?"
In the case of my app, the data goes only to Anthropic's API endpoint, but it is more common for apps to route this communication through a middleware endpoint of their own. This happens for various reasons. For instance, Cursor, a popular AI coding editor, lets its users work with a combination of different AI models without having to configure and subscribe to each platform individually. To do that, when users submit a query, the data is first sent to Cursor's endpoint, which in turn forwards it to the AI services. This also lets Cursor monitor each user's usage and charge them accordingly.
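To make the difference concrete, here is a rough Swift sketch of the direct route – a single request straight to Anthropic's Messages API, with no middleware in between. The function name, model alias, and parameters are illustrative, not my app's actual code:

```swift
import Foundation

// Sends the highlighted text straight to Anthropic's Messages API.
// There is exactly one destination here: api.anthropic.com.
func queryClaude(apiKey: String, prompt: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.anthropic.com/v1/messages")!)
    request.httpMethod = "POST"
    request.setValue(apiKey, forHTTPHeaderField: "x-api-key")
    request.setValue("2023-06-01", forHTTPHeaderField: "anthropic-version")
    request.setValue("application/json", forHTTPHeaderField: "content-type")
    request.httpBody = try JSONSerialization.data(withJSONObject: [
        "model": "claude-3-5-sonnet-latest", // placeholder model alias
        "max_tokens": 1024,
        "messages": [["role": "user", "content": prompt]]
    ] as [String: Any])

    let (data, _) = try await URLSession.shared.data(for: request)
    return String(data: data, encoding: .utf8) ?? ""
}
```

A middleware-based app would look the same from the inside, except the URL would point to the vendor's own server – which is exactly what a network monitor lets you see from the outside.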
It is good security practice to monitor an app's network traffic, especially if the application – or platform – offers free usage of a paid service. There are many emerging services that act as "AI hubs" with free usage of paid models, much like free VPNs.
In the case of free VPN services, it is public knowledge that they sell your data or use it to show you more advertising. But these AI apps could also be feeding your data to a back-end provider that you have no control over.
The solution
The best way to be sure is to inspect the app's traffic with a network monitoring tool like Little Snitch or Charles Proxy – both available for macOS, in this case. If an app claims to talk only to api.anthropic.com, that should be the only connection you see.
If the application is open source, it's good practice to browse through the issues and discussions opened by other users and – if you have the skill – to read through the code yourself.
As with any other aspect of information security, there is no definitive, 100% guaranteed safe solution, but by following these steps you can be more confident that the applications you're using are being honest about how they handle your data.