<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>AI on Lowell Builds It</title>
        <link>https://lowellbuildsit.com/tags/ai/</link>
        <description>Recent content in AI on Lowell Builds It</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en-us</language>
        <managingEditor>lowell@lowellbuildsit.com (Lowell)</managingEditor>
        <webMaster>lowell@lowellbuildsit.com (Lowell)</webMaster>
        <lastBuildDate>Sun, 10 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://lowellbuildsit.com/tags/ai/index.xml" rel="self" type="application/rss+xml" /><item>
        <title>AI and the Software Engineering Landscape</title>
        <link>https://lowellbuildsit.com/posts/ai-used-in-dev/</link>
        <pubDate>Sun, 10 May 2026 00:00:00 +0000</pubDate>
        <author>lowell@lowellbuildsit.com (Lowell)</author>
        <guid>https://lowellbuildsit.com/posts/ai-used-in-dev/</guid>
        <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction
&lt;/h2&gt;&lt;p&gt;AI used in development is a real thing. I have been testing some development models and using them for both small and large projects.&lt;/p&gt;
&lt;p&gt;I am not talking about Vibe Coding, the practice of trusting code generated by LLMs at face value with little to no review of the result. I am talking about writing software faster than before and introducing new people, both young and old, to coding capabilities they otherwise would never have considered. This is a big shift from the paradigm that previously limited software development to those who truly understood programming. But it comes at a cost: the possible loss of our foundations of engineering and programming.&lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s a brief summary of where we are going in this post:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;My setup and how to get something similar&lt;/li&gt;
&lt;li&gt;The power of AI assisted development&lt;/li&gt;
&lt;li&gt;How we keep good software principles at the forefront&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;setup&#34;&gt;Setup
&lt;/h2&gt;&lt;p&gt;Roo&lt;sup id=&#34;fnref:1&#34;&gt;&lt;a href=&#34;#fn:1&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;1&lt;/a&gt;&lt;/sup&gt; is my current choice for a VSCode extension that allows me to interact with an LLM server.&lt;/p&gt;
&lt;p&gt;The short version of the setup is &amp;ldquo;Install the extension from the marketplace&amp;rdquo;. After that, it gets specific to what you are working with on the LLM side.&lt;/p&gt;
&lt;p&gt;In my case, I have two setups. The first is with an &amp;ldquo;OpenAI Compatible&amp;rdquo; server. The second is utilizing Ollama&lt;sup id=&#34;fnref:2&#34;&gt;&lt;a href=&#34;#fn:2&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;Both of these were easy to configure. Roo lets you pick your provider and includes cloud options, although you know I am opposed to using those for my own needs.&lt;/p&gt;
&lt;h3 id=&#34;installing&#34;&gt;Installing
&lt;/h3&gt;&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Open VSCode&lt;sup id=&#34;fnref:3&#34;&gt;&lt;a href=&#34;#fn:3&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;3&lt;/a&gt;&lt;/sup&gt; or &lt;a class=&#34;link&#34; href=&#34;https://code.visualstudio.com/download&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;install it&lt;/a&gt; if you haven&amp;rsquo;t already&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://lowellbuildsit.com/posts/ai-used-in-dev/vscode-download.png&#34;
	width=&#34;976&#34;
	height=&#34;605&#34;
	srcset=&#34;https://lowellbuildsit.com/posts/ai-used-in-dev/vscode-download_hu_a07d48ead2e47feb.png 480w, https://lowellbuildsit.com/posts/ai-used-in-dev/vscode-download_hu_95f9ef9695f2277f.png 1024w&#34;
	loading=&#34;lazy&#34;
	
		alt=&#34;vscode-download.png&#34;
	
	
		class=&#34;gallery-image&#34; 
		data-flex-grow=&#34;161&#34;
		data-flex-basis=&#34;387px&#34;
	
&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Go to the &amp;ldquo;extensions&amp;rdquo; tab on the primary sidebar (that&amp;rsquo;s the one on the left side)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Search for &amp;ldquo;roo&amp;rdquo;&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://lowellbuildsit.com/posts/ai-used-in-dev/vscode-roo-ext.png&#34;
	width=&#34;350&#34;
	height=&#34;387&#34;
	srcset=&#34;https://lowellbuildsit.com/posts/ai-used-in-dev/vscode-roo-ext_hu_1716bf5b225d54b.png 480w, https://lowellbuildsit.com/posts/ai-used-in-dev/vscode-roo-ext_hu_55e625363bfe946e.png 1024w&#34;
	loading=&#34;lazy&#34;
	
		alt=&#34;vscode-roo-ext.png&#34;
	
	
		class=&#34;gallery-image&#34; 
		data-flex-grow=&#34;90&#34;
		data-flex-basis=&#34;217px&#34;
	
&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click on &amp;ldquo;Install&amp;rdquo; on the &amp;ldquo;Roo Code&amp;rdquo; extension - normally the first one&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://lowellbuildsit.com/posts/ai-used-in-dev/roo-ext-page.png&#34;
	width=&#34;1146&#34;
	height=&#34;630&#34;
	srcset=&#34;https://lowellbuildsit.com/posts/ai-used-in-dev/roo-ext-page_hu_24f8f8d124d4a1ae.png 480w, https://lowellbuildsit.com/posts/ai-used-in-dev/roo-ext-page_hu_876969193854278a.png 1024w&#34;
	loading=&#34;lazy&#34;
	
		alt=&#34;roo-ext-page.png&#34;
	
	
		class=&#34;gallery-image&#34; 
		data-flex-grow=&#34;181&#34;
		data-flex-basis=&#34;436px&#34;
	
&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
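&lt;p&gt;If you prefer the terminal, the same install can be done with the &lt;code&gt;code&lt;/code&gt; CLI. The extension identifier below is my best guess at the marketplace ID, so verify it on the extension&amp;rsquo;s page before running:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# Install the Roo Code extension from the VS Code marketplace
# (extension ID assumed; confirm it on the extension marketplace page)
code --install-extension RooVeterinaryInc.roo-cline
&lt;/code&gt;&lt;/pre&gt;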
&lt;p&gt;After it installs, you will see the Roo icon show up on the primary sidebar. Optionally, you can move it to the secondary sidebar (i.e. the one on the right side), which is where I prefer it: right-click the Roo icon, select &amp;ldquo;Move To&amp;rdquo;, and you should see &amp;ldquo;Secondary Side Bar&amp;rdquo; as an option.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://lowellbuildsit.com/posts/ai-used-in-dev/move-to-second.png&#34;
	width=&#34;499&#34;
	height=&#34;311&#34;
	srcset=&#34;https://lowellbuildsit.com/posts/ai-used-in-dev/move-to-second_hu_4e1ae667549db01f.png 480w, https://lowellbuildsit.com/posts/ai-used-in-dev/move-to-second_hu_912c82ab9e8fcb7c.png 1024w&#34;
	loading=&#34;lazy&#34;
	
		alt=&#34;move-to-second.png&#34;
	
	
		class=&#34;gallery-image&#34; 
		data-flex-grow=&#34;160&#34;
		data-flex-basis=&#34;385px&#34;
	
&gt;&lt;/p&gt;
&lt;h3 id=&#34;configuration&#34;&gt;Configuration
&lt;/h3&gt;&lt;p&gt;This is where things will likely deviate for each individual. In the initial setup or in the settings of Roo Code, you will find options for connecting to various providers of Large Language Models (LLMs). If you have a beefy workstation or a recent M-series Mac, you could run everything locally. In that case, you need to spin up a host server for the LLM; I use Ollama for that.&lt;/p&gt;
&lt;p&gt;I won&amp;rsquo;t go into the details of setting up Ollama because it gets pretty particular about how to prepare your machine depending on the hardware you have available. Suffice it to say, follow their docs on how to set up for your specific hardware.&lt;/p&gt;
&lt;p&gt;For Mac users, I have no idea what the &lt;em&gt;best&lt;/em&gt; solution is, but an AI search says Ollama should run great.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s also worth noting that I linked to the Docker installation steps because I primarily operate in containerized environments&amp;ndash;even on my desktop and laptop, but we&amp;rsquo;ll get to that in another post. That is how I have my Ollama server set up. If you aren&amp;rsquo;t running where containers are convenient, then look at their other install options.&lt;/p&gt;
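&lt;p&gt;For reference, here is a minimal sketch of the container launch based on the Ollama Docker docs, assuming an NVIDIA GPU and the NVIDIA Container Toolkit are already installed:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# Run the Ollama server in a container with GPU access,
# persisting downloaded models in a named volume
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
&lt;/code&gt;&lt;/pre&gt;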
&lt;p&gt;If you are like me and have a workplace that also hosts some AI models, they may serve them up over another protocol. In my case, work uses a lightweight wrapper that provides an OpenAI-compatible API, which allows us to select the &amp;ldquo;OpenAI Compatible&amp;rdquo; option in Roo for the API Provider.&lt;/p&gt;
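&lt;p&gt;A quick way to sanity-check an OpenAI-compatible endpoint before pointing Roo at it is to ask it for its model list. The base URL and API key below are placeholders for whatever your wrapper exposes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# List the models served by an OpenAI-compatible server
# (base URL and key are placeholders for your own setup)
curl -H &#34;Authorization: Bearer $LLM_API_KEY&#34; \
  https://llm.example.internal/v1/models
&lt;/code&gt;&lt;/pre&gt;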
&lt;p&gt;Once your server is running, you will need to access it and pull the models you want to use from Ollama. So far, Gemma 4&lt;sup id=&#34;fnref:4&#34;&gt;&lt;a href=&#34;#fn:4&#34; class=&#34;footnote-ref&#34; role=&#34;doc-noteref&#34;&gt;4&lt;/a&gt;&lt;/sup&gt; has been rock solid for me. With its more up-to-date data set and a robust set of features, it delivers fast and reliable code on my RTX 3090. But it peaks at about 23GB of VRAM usage, and that&amp;rsquo;s with the context window capped at 128,000 tokens. Technically it could do more, but I worry about pushing the limits of my GPU, and falling back to the CPU is slow. The only other model I have used with decent success is the devstral-small-2 24 billion parameter model. Explore what options are out there on the Ollama site.&lt;/p&gt;
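&lt;p&gt;As a rough sketch, pulling a model into a containerized Ollama server looks like the following. The variant name and the Modelfile approach to capping the context window are just one way to do it, so treat the details as assumptions and check the Ollama docs:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;# Pull the model into the running Ollama container
docker exec -it ollama ollama pull gemma4

# Optionally create a variant with a capped context window via a Modelfile
# (num_ctx is Ollama's context-length parameter; 128,000 matches my setup)
docker exec -it ollama sh -c 'printf &#34;FROM gemma4\nPARAMETER num_ctx 128000\n&#34; &gt; /tmp/Modelfile'
docker exec -it ollama ollama create gemma4-128k -f /tmp/Modelfile
&lt;/code&gt;&lt;/pre&gt;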
&lt;h3 id=&#34;other-things&#34;&gt;Other Things
&lt;/h3&gt;&lt;p&gt;There&amp;rsquo;s so much to these tools and they are always evolving. Roo supports features like checkpoints, auto-approval of AI commands, skills, modes, terminal interaction, context customizations, repository indexing, and more. I can&amp;rsquo;t possibly cover them all, and that is not my intent. I primarily want to give you an understanding of how my setup works and how you could do something similar.&lt;/p&gt;
&lt;h2 id=&#34;how-good-is-it&#34;&gt;How Good is it?
&lt;/h2&gt;&lt;p&gt;I have to admit, the first couple of times I used AI to write code were rough. I struggled to get the results I wanted and realized there is a great deal to learn about prompt engineering. It turns out simply asking &amp;ldquo;Can you write this feature?&amp;rdquo; is enough to get the AI to do something, but not always what you want. Those types of prompts often resulted in the AI getting lost or stuck in the middle of processing.&lt;/p&gt;
&lt;p&gt;If you want better results, it helps to craft your prompts more specifically and even try out the other modes for more complex objectives. For example, I had the AI write a tool that scans audio for certain terms. I chose not to write any code myself and instead used Gemma 4 to write the code, tests, and Dockerfile. Unsurprisingly, it started writing code in a few seconds and produced a pretty robust solution that got me almost the whole way there.&lt;/p&gt;
&lt;p&gt;We will get back to that in a moment. To get there, I didn&amp;rsquo;t just ask &amp;ldquo;can you write code to parse dirty words out of a video?&amp;rdquo; Rather, I switched to Ask mode to start discussing how such a tool could be written, and asked various questions about whether I could achieve the end goal of notifying a user of the existence of vulgar terms in a video clip purely by using local-only models. Roo said this was all possible.&lt;/p&gt;
&lt;p&gt;After that, I switched the mode to Architect and had Roo write out a plan for how to achieve the proposed concept with Python. I also spent some time asking if other languages like Golang or NodeJS made sense and the discourse was thorough and accurate. While both of those languages would have been great for a variety of use cases, the fact remains that Python has the most robust AI integrations and faster-whisper is the go-to for Speech-to-Text.&lt;/p&gt;
&lt;p&gt;With all that discussed, I switched to Code mode, told the AI to write down the whole plan it had architected with my prompts, and begin the implementation. Quick as lightning, the code spun out. Lint errors and mistakes showed up for moments and then were corrected seconds later. The AI correctly wrote the code, added useful logging for diagnosing problems, fixed lint errors, and wrote unit tests.&lt;/p&gt;
&lt;p&gt;I started the tool with a movie and gave it some specific words to find. It used &lt;code&gt;ffmpeg&lt;/code&gt; to extract the audio from the video and started &lt;code&gt;faster-whisper&lt;/code&gt; to transcribe it. That is where I ran into issues. I am actively diagnosing the problem, and next steps include prompting the model to add some verbosity and help me extract the details necessary to fix it.&lt;/p&gt;
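&lt;p&gt;To give a feel for the shape of the result, here is a stripped-down sketch of the core pipeline, not the actual generated tool: extract the audio with &lt;code&gt;ffmpeg&lt;/code&gt;, transcribe it with &lt;code&gt;faster-whisper&lt;/code&gt;, and flag the segments that contain any of the target terms.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;# Sketch of the audio-scan pipeline: ffmpeg, then faster-whisper, then term matching.
# Illustrative only; the AI-generated tool does considerably more than this.
import subprocess

from faster_whisper import WhisperModel


def extract_audio(video_path: str, wav_path: str = &#34;audio.wav&#34;) -&gt; str:
    &#34;&#34;&#34;Pull a mono 16 kHz WAV track out of the video with ffmpeg.&#34;&#34;&#34;
    subprocess.run(
        [&#34;ffmpeg&#34;, &#34;-y&#34;, &#34;-i&#34;, video_path, &#34;-vn&#34;, &#34;-ac&#34;, &#34;1&#34;, &#34;-ar&#34;, &#34;16000&#34;, wav_path],
        check=True,
    )
    return wav_path


def find_terms(wav_path: str, terms: list[str]) -&gt; list[tuple[float, float, str]]:
    &#34;&#34;&#34;Transcribe the audio and return (start, end, text) for segments containing a term.&#34;&#34;&#34;
    model = WhisperModel(&#34;small&#34;, device=&#34;cuda&#34;, compute_type=&#34;float16&#34;)
    segments, _info = model.transcribe(wav_path)
    wanted = [t.lower() for t in terms]
    hits = []
    for seg in segments:
        text = seg.text.lower()
        if any(term in text for term in wanted):
            hits.append((seg.start, seg.end, seg.text.strip()))
    return hits


if __name__ == &#34;__main__&#34;:
    wav = extract_audio(&#34;movie.mp4&#34;)
    for start, end, text in find_terms(wav, [&#34;example&#34;, &#34;term&#34;]):
        print(f&#34;{start:7.1f}s - {end:7.1f}s: {text}&#34;)
&lt;/code&gt;&lt;/pre&gt;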
&lt;p&gt;In short, it&amp;rsquo;s good at writing code. Pick any language, syntax, or linter, and you will see the model rip through prompts with decent accuracy.&lt;/p&gt;
&lt;h2 id=&#34;the-future-landscape-for-us-engineers&#34;&gt;The Future Landscape for Us Engineers
&lt;/h2&gt;&lt;p&gt;Here&amp;rsquo;s the deal: AI can write code, but it does not engineer.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m sure it can do some design and process through some higher level engineering stuff without issue. But when it comes down to it, you still need to tell the LLM what to do and what your goal is. Ultimately, we still need to be in front of the models, determining what architecture we need for our situation.&lt;/p&gt;
&lt;p&gt;Is it possible that the future will have fewer humans determining what the AI should do? I suppose it is. But I think what is likely happening now, and will keep happening, is that we are going to see a surge of developers entering the field without traditional foundations.&lt;/p&gt;
&lt;p&gt;I have seen this before and I&amp;rsquo;m sure it will happen again: a new developer comes along with a lot of good-looking work and has the words to match. But when push comes to shove, the work produced is clearly lacking in some basic ways that indicate to me it was written by an AI without much review. A good software engineer would have caught the subtle issues and corrected them.&lt;/p&gt;
&lt;p&gt;These developers may have great promise, but if they are not trained and guided on the basic principles and fundamentals of design and engineering, they will likely prompt the AI models to make incorrect choices and write flawed or problematic code. Worse yet, they might not be able to offer real code reviews for others, which is a critical function of engineers and one that we haven&amp;rsquo;t found a way to replace with AI yet.&lt;/p&gt;
&lt;p&gt;With that, I am making an effort as a more senior engineer at my work to begin mentoring those coming after me. I would like to cover some of those principles and ideas in future posts to help anyone reading this blog.&lt;/p&gt;
&lt;h2 id=&#34;links&#34;&gt;Links
&lt;/h2&gt;&lt;p&gt;References for your convenience&lt;/p&gt;
&lt;div class=&#34;footnotes&#34; role=&#34;doc-endnotes&#34;&gt;
&lt;hr&gt;
&lt;ol&gt;
&lt;li id=&#34;fn:1&#34;&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://docs.roocode.com/getting-started/installing&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://docs.roocode.com/getting-started/installing&lt;/a&gt;&amp;#160;&lt;a href=&#34;#fnref:1&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:2&#34;&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://docs.ollama.com/docker&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://docs.ollama.com/docker&lt;/a&gt;&amp;#160;&lt;a href=&#34;#fnref:2&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:3&#34;&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://code.visualstudio.com&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://code.visualstudio.com&lt;/a&gt;&amp;#160;&lt;a href=&#34;#fnref:3&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id=&#34;fn:4&#34;&gt;
&lt;p&gt;&lt;a class=&#34;link&#34; href=&#34;https://ollama.com/library/gemma4&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;https://ollama.com/library/gemma4&lt;/a&gt;&amp;#160;&lt;a href=&#34;#fnref:4&#34; class=&#34;footnote-backref&#34; role=&#34;doc-backlink&#34;&gt;&amp;#x21a9;&amp;#xfe0e;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</description>
        </item>
        
    </channel>
</rss>
