AI, AI, AI – wherever you look, the entire industry revolves around one topic. Is it a revolution or just another buzzword? Answer that question for yourselves :p
But let's not start this article with too much philosophizing. Recently, the Qt Company announced Qt AI Assistant, a plugin for Qt Creator that integrates with a selected large language model (LLM).
I couldn't ignore this, so I decided to check how it works in practice. And that's what this blog post will be about. If you're curious about how to set it up, how it works, and my thoughts on it, read on.
Qt AI Assistant is a Qt Creator plugin that adds an AI coding tool directly inside the IDE. It lets you ask questions, generate code, and get help with Qt, C++, or QML without leaving your project.
Instead of searching through documentation or forums, you can just ask the assistant things like how to use a specific Qt class, fix an error, or generate a small piece of code.
Since it runs inside Qt Creator, it fits naturally into the normal Qt development workflow and makes it faster to experiment, learn APIs, and solve problems.
You can treat it as your personal AI assistant that helps you in most boring tasks ;)
Let's start with the key features of Qt AI Assistant:
Built into Qt Creator – works as a plugin directly inside the IDE, so you don't need external tools.
Access to popular LLMs – integrates with large language models to generate answers, code, and explanations.
Code generation – can generate widgets, QML components, and other common Qt code.
Code explanation – helps you understand existing code, especially useful in larger or unfamiliar projects.
Error troubleshooting – explains errors and suggests possible fixes, saving time during debugging.
All right, now that we know what it is, it's time to install and use Qt AI Assistant for the first time.
At the time of writing, Qt AI Assistant is only available with a paid Qt license. It can also be used with an educational license for students.
The first step is to open the Extensions view in Qt Creator and select the "Use external repositories" option. Qt AI Assistant is still in development, so this is, let's say, a "beta" version. After updating the list of available extensions, search for Qt AI Assistant and install it.
Now that we have the plugin installed, it's time to choose the "brain" for our AI assistant, i.e. the connection to an LLM. We have quite a few options to choose from, including the most popular models:
Code Llama 13B QML (for Qt 6, running in a cloud deployment of your choice)
Code Llama 13B (for Qt 5, running in a cloud deployment of your choice)
Codestral
Claude 4.0 Sonnet
Claude 4.5 Sonnet
GPT 5
DeepSeek V3.2
Code Llama 13B QML through Ollama (running locally on your computer)
Code Llama 7B QML through Ollama (running locally on your computer)
Code Llama 7B through Ollama (running locally on your computer)
This list will certainly grow in the future. For the rest of this article, I will use GPT 5, because that is what I have access to. I will not assess whether a given model is appropriate, nor compare the models with each other; instead, I will take a comprehensive look at the Qt AI Assistant tool as it is.
To connect our LLM, go to Edit > Preferences > AI Assistant and, in the General tab, select the LLM from the list and enter the API key (check your LLM provider's API documentation for how to obtain one). I will describe the rest of the options later in this blog post.
All right, it's installed, but what does it actually do? Let's go through the most important features.
The first major feature is automatic code completion. To make sure this option is enabled, go to the plugin options tab and select "Enable automatic code completion". Alternatively, you can toggle it by clicking the star button on a green background in the top right corner of Qt Creator.
When we are writing code and pause, the AI assistant suggests a completion. It looks something like this: highlighted code in grey text. We can accept the highlighted code with the Tab key or reject it with the Backspace key. In the graphic below, you can see how I wrote an empty skeleton for a Button object, and the AI suggested highlighted code that I could add with a single keystroke. Handy thing!
Of course, it also works with C++ code. If you want to trigger it faster, press the Ctrl+' shortcut.
Now it's time for the second feature: the ability to bring up a window with a prompt for the model (by default with the key combination Ctrl+Shift+A, or by highlighting the code you want to cover and clicking the stars icon in the corner).
As you can see, the prompt window appears, and we can start asking the AI to do things for us.
After the AI model processes the whole request, we get an answer with a fragment of code that we can copy-paste into our application.
The third most important feature is the so-called smart commands. This is a set of specially prepared commands that help us speed up our work. So we have commands such as:
/doc - Generates documentation for the selected code. It can create function descriptions, parameter explanations, and general comments that help others understand what the code does.
/explain - Explains the selected code in plain language. Useful when working with unfamiliar code or trying to quickly understand complex logic.
/fix - Analyzes the selected code and suggests fixes for errors, bugs, or incorrect patterns. It may also propose cleaner or safer alternatives.
/inlinecomments - Adds inline comments directly inside the code to explain what specific lines or blocks are doing. Helps improve readability and maintainability.
/qtest - Generates unit tests using the Qt Test framework (QTest). It creates test cases for functions or classes, helping you improve test coverage.
/review - Performs a code review of the selected code. It can point out potential bugs, bad practices, performance issues, and suggest improvements.
I mentioned that I would explain the additional options that Qt AI Assistant allows you to set, so let's go through them one by one. When using an LLM, it is important to understand what context is.
Each LLM works a bit like RAM – it can hold a certain amount of information, but no more. Normally, when using Qt AI Assistant, we select the piece of code we are interested in, and that is the context. If we ask it to "correct this", the assistant knows that we want to correct this particular piece of code.
However, we often work on large projects that contain many related files, and the assistant must be familiar with them. To this end, the first option ("Enable context from other files of project") allows us to extend the context to other files in the project – when we ask it to fix a bug, for example, the assistant can trace through several files and fix the code where necessary.
We can also select the "Use only open files as context" option. This is useful when we are working on a very large project with hundreds of files and do not want to "overload" the context memory with unnecessary information.
The next option ("Use context filtering file .aiignore") allows us to add filtering rules using the .aiignore file. It works similarly to .gitignore: if a file or path is listed in it, the assistant will not use it for context. Useful for protecting confidential information.
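Since the file follows .gitignore-style patterns, a minimal .aiignore might look like this (the paths are hypothetical examples, not from any real project):

```
# Keep secrets and credentials out of the assistant's context
.env
secrets/
*.pem

# Skip generated and third-party code
build/
3rdparty/
```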
Lastly, the "Use instruction file agent.md" option allows using a file with instructions and context for the AI Assistant within a specific project. It can define coding guidelines, project conventions, and other information that helps the assistant generate more relevant and accurate responses.
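A minimal agent.md might look like this (the contents are my own illustration of the kinds of instructions such a file can hold, not an official template):

```markdown
# Project instructions for the AI Assistant

- Target Qt 6 with C++17; prefer QML for UI code.
- Follow the Qt coding style (camelCase, 4-space indentation).
- Document public classes and functions with Doxygen comments.
- Do not suggest deprecated Qt 5 APIs.
```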
We already know how the tool works from a technical point of view, but what about its quality? Well, that is a complex issue.
As for the plugin itself, it works fine, but to be honest, I expected something more. If you look closely, it is literally a text input where you type a query and get back a response from the LLM. Nothing particularly complicated. Without being overly confident, I think you could write such a plugin yourself quite quickly. And remember that this plugin is only available in the paid version, and you have to pay separately for the LLM of your choice.
It should also be remembered that the entire tool is only an intermediary between Qt Creator and the large language model. The speed and quality of automatic code completion and prompt responses depend on the LLM used.
It is impossible to say unequivocally which model is the best. Research and benchmarks are ongoing, and new versions of AI models are emerging so quickly that it is difficult to keep up.
We are slowly coming to the end, so now perhaps the most interesting thing, namely my personal opinion ;)
As far as everyday use is concerned, I must admit that I have used this plugin only a little, but I found it lacking in depth. OK, direct integration in Qt Creator is a nice addition, but I am hoping for further development of this tool.
It certainly speeds up work and relieves the programmer of tedious tasks (generating documentation, unit tests, and stylistic refactoring) and allows them to focus on more demanding and engaging tasks (planning architecture, working on critical components).
If I had to mention major drawbacks, I would limit myself to three main ones:
No conversation window - The prompt built into Qt AI Assistant is used to correct code fragments, but once the job is done, we close it. Personally, I would prefer a solution similar to Cursor / Kiro / GitHub Copilot / other AI coding tools, where the conversation window is always on the right-hand side and we have a full view of the conversation, not just one-off messages.
Feature locked behind a paid license - What can I say, in an age when AI is becoming increasingly accessible to everyone (e.g. GitHub Copilot has a free plan), limiting Qt AI Assistant to a paid license only is like shooting yourself in the foot. I think that Qt Company should make this a free feature since most users still need to pay for LLM.
No official hosted LLM for Qt + QML development - Qt AI Assistant requires the use of ready-made models. Although Code Llama 13B QML has been developed, it requires your own hosting, which is impossible for most private users. To run an LLM at a reasonably acceptable level, you need to purchase extremely expensive hardware, which not everyone can afford. I think that if the Qt Company hosted its own model, which they would develop on an ongoing basis, there would certainly be people willing to pay for a subscription.
Finally, a very important issue. The biggest problem with LLMs is that they require a lot of computing power to operate. We can either provide the equipment ourselves (which is prohibitively expensive) or use cloud solutions offered by the largest companies.
The downside, however, is data privacy and security. We do not know how the data sent to the AI is used or stored. That is why I urge you to use AI with caution. Let it help us, let it write code with us, but let it not see API keys or other secrets ;)
Have you already tested Qt AI Assistant? Feel free to share your thoughts with us. I also invite you to browse through our other articles. See you soon!