Balancing Agentic AI with Traditional Engineering
Agentic AI acts autonomously, making decisions and pursuing goals over time. The same applies to coding agents.
When incorporating agentic AI into the development flow, it’s crucial to stay in control by applying traditional software engineering principles. Without them, you risk introducing unnecessary complexity that undoes the benefit of the quick initial progress AI delivers, and such complexity can overwhelm your efforts regardless of how much AI or time you invest. Keeping control over dependencies and the structure of the results is equally essential; being mindful of these factors early prevents you from getting stuck later.
The coding agent workflow is easier to demonstrate in a screencast than to explain in words. So let’s use Junie, JetBrains’ new coding agent, which officially launched in mid-April, as an example. In just a few minutes, you can see how Junie pursues goals autonomously and handles iterations. The recording has been sped up for brevity, but feel free to pause at interesting points and read the tool’s output.
Screencast
A few considerations: For any IDE vendor, it’s crucial to have an offering in this space; otherwise, developers and companies may switch. The competition is intense: the companies behind Cursor and Windsurf have already raised hundreds of millions from investors, and TechCrunch reports that Windsurf is in talks to be acquired by OpenAI for a sum of $3 billion.
JetBrains Junie vs JetBrains AI Assistant
But how does Junie relate to the JetBrains AI Assistant, which has been available for quite some time?
While there is some overlap between the AI Assistant and Junie’s coding agent workflows, they serve distinct purposes. For instance, the AI Assistant lets you select a piece of code to request an explanation or make specific changes with limited context. In contrast, a coding agent can create a detailed plan involving complex subtasks that are autonomously applied and refined through iterations. From a user interface standpoint, both functionalities could certainly coexist in the same UI; just take Windsurf as an example.
JetBrains Junie vs Other Tools
Under the hood, Junie utilizes cloud-based large language models (LLMs). While other tools allow users to select their LLM provider, Junie does not currently offer this option. For tasks like the CSV visualizer demonstrated, many developers might opt for Claude 3.7 Sonnet in other tools anyway, since Claude excels at such tasks. Nevertheless, providing this flexibility could be beneficial, for instance, if a company’s policy restricts LLM usage to specific managed services like AWS Bedrock. As someone who prefers running AI locally for reasons like privacy and control, I believe an agentic flow like the one shown isn’t achievable yet with local AI setups, though I hope this changes in the future.
BTW: If you apply the same approach demonstrated in the screencast with Claude 3.7 Sonnet selected in Windsurf or another tool, you will naturally get a similar result, as that part of the functionality is powered by the cloud LLM.
Quota
Depending on the complexity of the tasks (that is, the number of tokens involved), this can quickly eat up your quota, similar to how credits work in other tools. To check your remaining quota, navigate to the Junie tab, right-click the title bar, and select License Info. That doesn’t feel like an intuitive way to access such crucial information about a limited resource.
Deterministic Local Tools
All the AI hype aside, many practical tasks remain more reliable and efficient when carried out with traditional, deterministic local tools. Code analysis and refactoring, for example, are areas where JetBrains has traditionally been very strong, whereas with LLMs you still have to manually review any changes they make.
Considerations
Finally, these considerations still apply: with cloud-based LLMs, your code is transferred to a cloud provider, which may not always be permissible, depending on what you’re working on or for whom, due to legal restrictions or company policies. Local LLMs can serve as an alternative for some tasks; although less capable, they remain somewhat underestimated.
Smart Combined Approach
Developers also need to keep control over complexity, technical debt, dependencies, privacy concerns, and other critical aspects of their work. From a cost and resource perspective as well, a smart approach that combines cloud LLMs, smaller local models for simpler tasks, and traditional tools where appropriate is not only valid but more efficient in the mid to long term.