Google and OpenAI offer free API credits—but there’s one catch

When I started building my own AI features in my home lab, I avoided AI APIs like the plague – I'd always heard how expensive they were. It was only a few months ago that I realized Google and OpenAI both offer free access to their APIs, with a (not so small) catch.
You don’t need to pay Google or OpenAI to use their API
You only pay for intensive use
You might think that API access to AI models is expensive, and normally you'd be right. But both Google (with Gemini) and OpenAI offer free tiers for their respective APIs.
Free access to Google’s API is actually the easiest to get. It’s simple and Google tells you exactly which models are available for free or not. What Google doesn’t tell you very clearly, however, is what the free usage limit is on these models. It’s a bit more ambiguous than OpenAI, but the models are still ultimately free to use.
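To give a sense of how little is involved once you have a key, here's a sketch of a call to Gemini's REST `generateContent` endpoint using only Python's standard library. The model name is illustrative – check Google's documentation for which models are currently on the free tier, and generate a free key in Google AI Studio.

```python
import json
import os
import urllib.request

# Illustrative model name -- consult Google's docs for the current free-tier models.
MODEL = "gemini-2.0-flash"
ENDPOINT = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent"

def gemini_payload(prompt: str) -> dict:
    """Build the JSON body the generateContent endpoint expects."""
    return {"contents": [{"parts": [{"text": prompt}]}]}

def ask_gemini(prompt: str) -> str:
    """Send a single prompt and return the first candidate's text."""
    key = os.environ["GEMINI_API_KEY"]  # free key from Google AI Studio
    req = urllib.request.Request(
        f"{ENDPOINT}?key={key}",
        data=json.dumps(gemini_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["candidates"][0]["content"]["parts"][0]["text"]
```

The same request works from n8n's HTTP node or a shell script – the payload shape is the only thing you need to get right.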
OpenAI, however, keeps its free API access a bit more hidden, but is more upfront about exactly what you get for free. To get free access to the OpenAI API, sign up for an OpenAI Platform account (not a ChatGPT account), then head to Settings > Data Controls in the OpenAI Platform.
There you’ll see a few toggles, but the only one that matters is the “Share input and output with OpenAI” toggle at the bottom. If you enable it, you’ll get free daily use for a number of OpenAI models.
For example, you will get 250,000 tokens per day on some of OpenAI’s most powerful models, like GPT-5.2, GPT-5.1-Codex, etc. For lighter models, like GPT-5.1-Codex-Mini, GPT-5-Mini, GPT-5-Nano and others, you get up to 2,500,000 tokens per day for free.
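A minimal call against one of these models might look like the following – a sketch only, with the model id (`gpt-5-mini`) standing in for whichever model your account lists as free:

```python
import json
import os
import urllib.request

# Illustrative model id -- pick one from the free-tier list on your account.
MODEL = "gpt-5-mini"
ENDPOINT = "https://api.openai.com/v1/chat/completions"

def openai_payload(prompt: str) -> dict:
    """Build the JSON body for the Chat Completions endpoint."""
    return {"model": MODEL, "messages": [{"role": "user", "content": prompt}]}

def ask_openai(prompt: str) -> str:
    """Send a single prompt and return the assistant's reply text."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(openai_payload(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```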
250,000 tokens for the heavier models and 2,500,000 tokens for the lighter ones may not sound like much, but you'd be surprised what you can actually do with that many tokens. Note, however, that once you exceed these limits, you start paying for tokens at the going rate, regardless of which model you use.
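Since overage billing kicks in silently once you pass a limit, it can be worth tracking your own usage against the daily allowances. Here's a minimal sketch, with the limits hard-coded from the numbers above (treat them as illustrative – they may change):

```python
from datetime import date

# Daily free-tier limits as quoted above; actual limits may change.
FREE_LIMITS = {
    "gpt-5.2": 250_000,        # heavier models
    "gpt-5.1-codex": 250_000,
    "gpt-5-mini": 2_500_000,   # lighter models
    "gpt-5-nano": 2_500_000,
}

class TokenBudget:
    """Track per-model token usage so you notice before leaving the free tier."""

    def __init__(self, limits=FREE_LIMITS):
        self.limits = limits
        self.day = date.today()
        self.used = {}

    def record(self, model: str, tokens: int) -> int:
        """Record usage and return tokens remaining in today's free allowance."""
        if date.today() != self.day:  # free limits reset daily
            self.day, self.used = date.today(), {}
        self.used[model] = self.used.get(model, 0) + tokens
        return self.limits.get(model, 0) - self.used[model]

budget = TokenBudget()
remaining = budget.record("gpt-5-mini", 12_000)  # 2,488,000 tokens left today
```

You'd feed `record()` the `usage.total_tokens` field that both APIs return with each response.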
API access to LLMs is actually very useful
From Open WebUI to creating custom automations, your homelab will benefit
So why is free API access important? I’m glad you asked. With API access to AI, you can use it however you want, not just in a web chat app.
For example, n8n can call AI models via API for a wide range of tasks. I started using AI via the API in my n8n workflows for things like content generation and ideation. You can have n8n ping your website, and if it stops responding, hand things off to the AI to diagnose what's going on and send you a message.
You can also configure n8n to use AI to take an idea you have for a YouTube video, research it, and come up with a draft script for the video. You can do a lot with API access to AI models.
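Outside of n8n, the monitor-then-diagnose pattern is easy to sketch directly. This is an illustrative outline, not the workflow from the article – the failing branch is where you'd call the model with the prompt:

```python
import urllib.error
import urllib.request

def site_is_up(url: str, timeout: float = 5.0) -> bool:
    """Return True if the site answers with an HTTP status below 500."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (urllib.error.URLError, TimeoutError):
        return False

def diagnosis_prompt(url: str, error: str) -> str:
    """The prompt we'd hand to the model when the health check fails."""
    return (
        f"My site {url} stopped responding ({error}). "
        "List the most likely causes and the first three commands "
        "I should run on the server to narrow it down."
    )

# In a real n8n workflow, the failing branch would pass diagnosis_prompt()
# to an AI node and pipe the answer into a notification (e.g. Telegram).
```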
Another way to use API access to AI is to use Open WebUI. You might wonder why using Open WebUI would even be necessary with models like GPT-5.2, but there’s actually a very good reason to use your own AI chat interface.
With Open WebUI and free access to GPT-5.2, you can use OpenAI's most powerful models without paying for a ChatGPT Plus subscription. This lets you try out (or just use outright) the best models without spending a cent.
Open WebUI also lets you set custom system prompts, both app-wide and per-chat. For example, I have a dedicated chat with a custom system prompt for formatting outlines. I write my articles in Markdown, but I draft my outlines in a different markup. I paste an outline into that chat, and Open WebUI strips everything except the top-level headers, then rewrites the outline in sentence case Markdown for me.
This is something I used to do manually, but with Open WebUI it now only takes a second. I've tried using regular ChatGPT or Gemini chats for this in the past, but they'd often get confused and wouldn't give me the Markdown output I needed – a custom system prompt fixes that, and there's no easier way to set one up than with Open WebUI and free API access to OpenAI's models.
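For comparison, if the outline already used `#` for its top-level headers, the strip-and-reformat step could even be done deterministically – a hypothetical sketch of what my system prompt asks the model to do:

```python
def outline_to_markdown(outline: str) -> str:
    """Keep only top-level '# ' headers and rewrite them in sentence case."""
    kept = []
    for line in outline.splitlines():
        if line.startswith("# "):  # '## ' and deeper levels are dropped
            title = line[2:].strip()
            # Sentence case: capitalize the first letter, lowercase the rest.
            kept.append("# " + title[:1].upper() + title[1:].lower())
    return "\n".join(kept)
```

The appeal of the LLM version is that the input doesn't have to be this tidy – it copes with whatever markup the outline is actually in.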
There’s always a catch
Your chats aren’t private with the free API, but they weren’t private without it either
With anything free, there’s always a catch, and free access to AI APIs is no exception. Normally with API access to AI models, you can choose not to share your input and output with the AI company. Anything you send to OpenAI’s API obviously has to reach their servers, but via the API you can opt out of having that data stored or used for training.
To get free access to the API, however, you must agree to share that data with OpenAI. This might be a deal breaker for you, and I understand, but let me lay it all out on the table: your chats aren’t private as it is. When you use the ChatGPT or Gemini web apps, the data you share there is already used for training and data retention purposes.
So if you’re fine using the web apps, you should be fine using the API and sharing input/output in the same way. And any time you want to stop sharing, just flip the toggle off and start paying for API usage. It’s nice that this is an option you can switch on and off at will.
OpenAI even lets you choose whether you want to share a single project or all projects with them, but only the shared project will have free access to the API.
Nothing in life is truly free, but that’s okay
Ultimately, free API access to AI models will never truly be free. However, I think sharing chat input/output via the API suits my use case, as well as most people’s. If you really want privacy, you’ll just have to pay for it.