For enterprise organizations, the last year has made one thing clear: implementing AI is not just about choosing a model. It is about building the right architecture, governance, security, and orchestration around rapidly evolving technology. As platforms mature and new frameworks emerge, teams are being forced to adapt in real time. This past year offered a firsthand look at what that actually means in practice.
Implementations of AI, a Year in Review
Being a new developer in the industry during 2025 has been quite a challenge, especially with AI making many junior-level development tasks easier than they have been in years. However, entering the workforce during the dawn of AI also comes with the benefit of learning the technology as it changes.
Every month, a new model is released that is stronger than the last, and as that happens, strategies and implementations of AI are changing at Mach 5 speed, sometimes even on a weekly basis. MCP, agent orchestration, security practices, governance models, and frameworks surrounding AI are all being rapidly developed to catch up with improvements in model performance and accuracy.
It is an especially useful time to enter the workforce because most junior developers are doing the bulk of their hands-on learning right now. I have been lucky enough to work with the latest AI development technology, and it has improved my confidence in working with and leveraging AI in a world where it is becoming incredibly prevalent. Now I have experience not only with AI-assisted software engineering, but also with AI as a solution itself.
However, the destination is not what is interesting. It is the process of getting there, and the adaptability required to find the correct solution. This journey starts with Copilot Studio, Microsoft's no-code platform and the first one our team attempted to use to create an AI-centered product.
Copilot Studio
In the beginning, the other intern, Ethan, and I started from scratch. We were given a project description, a platform, and a project manager to guide us along.
The project itself seemed simple at first glance: an AI chat assistant designed to improve the institutional memory of RBA as an organization. The chatbot would guide the user through a conversation about their workday, remember the contents of that conversation, and store them away for later use. Then, upon request, the agent would retrieve the information from those conversations and ideally provide better visibility into employee work and capabilities that are often uncaptured by other systems, such as resumes.
That idea came with several challenges. Where would the information be stored? What would Microsoft allow us to interface with? How would we split retrieval and logging? Most importantly, we did not know how to use the platform, and no one else did either.
The first couple of weeks were nothing but research and experimentation, so we decided the best way forward was to divide and conquer. Ethan implemented his version of the chatbot with a more traditional topic trigger and Power Automate flow-based approach, while I worked on a chatbot that used AI to interpret inputs.
Both had their benefits and downsides, but since the technology was so new, there were plenty of bugs to be found in the AI input method. As a result, we leaned toward the more traditional flow, while still using RAG and AI tools to extract data and interpret it.
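The RAG portion of that hybrid approach can be sketched with a toy example. This is not our actual implementation, and the names and vectors here are invented for illustration; it only shows the core retrieval step: rank stored work-log entries by embedding similarity to the query, and hand the top matches to the model as context.

```typescript
// Hypothetical names throughout; a minimal sketch of the retrieval step in RAG.
type LogEntry = { text: string; embedding: number[] };

// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored work logs by similarity to the query embedding and keep the top k.
function retrieveTopK(queryEmbedding: number[], logs: LogEntry[], k: number): LogEntry[] {
  return [...logs]
    .sort((x, y) =>
      cosineSimilarity(queryEmbedding, y.embedding) -
      cosineSimilarity(queryEmbedding, x.embedding))
    .slice(0, k);
}
```

In a real system the embeddings come from an embedding model and live in a vector store; the ranking logic, though, is exactly this simple.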
During this time, Ethan and I met several times a day to stay aligned and share findings. As we were developing the product, every week brought new AI news: the introduction of multi-agent orchestration, new Power Automate tools, MCP connectivity, and Microsoft Foundry agents. It was hard to keep up with it all, but as these new technologies emerged, we adapted and implemented the changes.
By the time our internship was over, we had a working chatbot in Microsoft Teams where users could log their work, fetch the contents of their work, and use several tools to summarize and present what they had completed. As I worked through this project, I also logged all of my own work in the agent in order to test it. To show how far our agent had come, I would like to share how the AI summarized this project for me:
How did this project improve my proficiency with AI?
Diverse Task Engagement
You worked on bug fixing, data population, security enhancement, and demo preparation, all directly tied to AI project goals and technologies.
Technical Challenge Resolution
Facing and resolving technical issues deepened your troubleshooting skills and practical understanding of AI systems.
Hands-On Experience with AI Tools
You learned and applied multiple AI-related tools, including Copilot Agent, n8n, AI Builder, Dataverse, Power Automate, Power Apps, SharePoint, Azure Portal, and Swagger.
Achievements and Outcomes
Your work led to improved agent functionality, enhanced security, successful integrations, and knowledge sharing within the team.
Growth Through Practice and Collaboration
Your proficiency grew through practical experience, problem-solving, and collaboration, as evidenced by your logged hours and outcomes.
All points are based strictly on your RBA Work Logs, with no assumptions beyond the recorded data.
The most interesting part of all of this was just how easy it was to get set up initially. There was little to no learning curve, and we had it minimally functional within weeks.
However, the real challenge was going any further than that. Complex, enterprise-level solutions require more than fast setup. They require flexibility, control, and the ability to scale. Copilot Studio was not going to cut it for us, so we decided to experiment with a new candidate.
n8n
After working on the Copilot Studio project, we continued working with RBA on a different project that was mostly unrelated to AI. During that time, though, I did some experimentation with the n8n platform, which gave me hope for a better enterprise AI solution.
The real benefit of n8n was using technology that was agnostic to the Microsoft platform. n8n allowed me to experiment with Supabase as a database and authentication solution, and React to display everything cleanly. This also meant I could secure each log to the user who created it, which we had not been able to do successfully in Copilot Studio.
That way, when someone submits a work log for a client that no one else should have access to, they can do so without worrying. This was a breakthrough, especially since AI agent platforms often have limited security measures available to developers. Trusting AI to make secure calls and fetch the correct data is a security nightmare, so being able to ensure the AI only had access to secure tools that use industry-standard practices was a real relief.
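The principle can be shown in a few lines. In the real solution this is enforced by Supabase row-level security policies; the sketch below uses a plain in-memory filter as a stand-in for that policy, and all names are hypothetical. The key idea is that the AI never queries the store directly: it can only call a tool that is already bound to the authenticated user's identity.

```typescript
// Hypothetical sketch: the model is handed a tool pre-scoped to the caller,
// so there is no way for it to request someone else's rows.
type WorkLog = { ownerId: string; client: string; summary: string };

// A plain filter stands in here for a Supabase row-level security policy.
function makeFetchLogsTool(store: WorkLog[], authenticatedUserId: string) {
  return function fetchMyLogs(): WorkLog[] {
    return store.filter((log) => log.ownerId === authenticatedUserId);
  };
}
```

Because the user ID is captured in the closure before the model ever runs, a prompt injection cannot widen the query: the worst the AI can do is call a tool that only returns the caller's own data.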
In the process of working on this solution, I once again logged my work to test it out, so here is what n8n thought of my progress:
Here’s a focused summary of how Project 1234 has improved your AI proficiency, based strictly on AI-related work:
You strengthened your skills in conversational AI system design by improving natural language interactions in the n8n chatbot. Automating name and time detection pushed you deeper into context-aware AI flows rather than static responses.
You worked directly with vector data and embeddings, including manually replacing embeddings with accurate data. That’s hands-on experience with how retrieval-based AI systems function under the hood.
You also developed experience in AI parameter tuning by implementing sliders to adjust system behavior. That suggests growing comfort with optimizing model outputs and experimenting with AI performance variables.
Additionally, enabling role-based access to vector data improved your understanding of how AI systems are governed, secured, and structured in production environments.
Overall, this project moved you from basic integration toward fine-tuning, contextual control, and AI workflow orchestration. Would you say your biggest learning curve was around embeddings, authentication with AI data, or tuning conversational behavior?
I would say this was quite an upgrade from the initial Copilot Studio solution, and on top of that, the data was secured.
In the end, n8n provided an extremely affordable middle-ground solution that combined no-code with code-first thinking and gave me a taste of what an enterprise-level solution could look like. However, it still was not quite enough to satisfy our needs, and the n8n project was short-lived as our other project took priority for the next few months.
The AI Solution as It Stands Today
The final piece of the puzzle is one that still has room to change.
For our latest iteration, we have gone pro-code with our approach, preferring the customizability of existing frameworks such as Microsoft Projects SDK and Microsoft Agent Framework. In this approach, we decided to split our solution into several pieces. Cosmos DB would store all persisted data, Foundry would house the agents we use to make requests, C#/.NET would serve as the access point for all interactions, and a React frontend would provide the user experience.
We went even further by deciding to orchestrate our agents through a central orchestrator agent that routes requests to agents that are experts in their respective topics. This way, we do not bog down the system prompts of any one agent and can delegate certain tasks to the agents best suited to handle them.
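The routing idea above can be sketched in a few lines. This is an illustration, not our production code (which runs on Foundry agents through C#/.NET), and the agent names and matching rules are invented; it shows only the shape of the pattern: a central orchestrator picks one specialist per request instead of stuffing every skill into a single system prompt.

```typescript
// Hypothetical sketch of orchestrator routing to specialist agents.
type Agent = {
  name: string;
  canHandle: (request: string) => boolean;
  handle: (request: string) => string;
};

const specialists: Agent[] = [
  {
    name: "logging-agent",
    canHandle: (r) => /\blog\b|logged|record/i.test(r),
    handle: (r) => `Saved work log: ${r}`,
  },
  {
    name: "summary-agent",
    canHandle: (r) => /summar/i.test(r),
    handle: () => "Summary of recent work requested.",
  },
];

// The orchestrator routes to the first specialist that claims the request.
// In practice this decision is itself made by an LLM, not a regex.
function orchestrate(request: string): string {
  const agent = specialists.find((a) => a.canHandle(request));
  return agent ? agent.handle(request) : "No specialist available for this request.";
}
```

Each specialist keeps a short, focused system prompt for its own topic, which is exactly the benefit described above: no single agent's prompt gets bogged down with every capability.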
This is not even the end of the story. We developed this orchestration because these technologies are still actively in development, and the Agent Framework at the time did not have orchestration fully compatible with Foundry yet. As we continue to develop this product, we will keep iterating and adapting, improving each time and applying better techniques as the platform grows.
The Real Lesson: Adaptability Matters More Than Certainty
If you are wondering what this current AI thinks about my work, I unfortunately do not have an answer for you, but for a good reason: we had to take a break from logging my own progress while we focus on creating real-world, enterprise-grade solutions.
There is still so much left to do with this technology, and it moves incredibly fast. But the most important thing to do is remain adaptable, especially during a time when next week could change what you did this week and the week before.
Investing in learning and keeping up with the curve is what we do best as software engineers, and AI is no exception. Our work will continue to be enterprise-grade and built for the future.
For organizations trying to move from experimentation to scalable implementation, this is where the real work begins. At RBA, that means helping teams navigate the gap between AI hype and enterprise-ready execution, from architecture and orchestration to governance, security, and adoption.
About the Author
Lucas Moen
Intern - Software Engineering & Data Team
Lucas Moen is a Software Engineering Intern at RBA and a lifelong learner at heart. He has experience with full-stack development and takes pride in creating clean, secure, future-proof software. He leverages frameworks like React and React Native to craft visual experiences, and C#/.NET to deliver data on a clean architecture.
Living in the beautiful city of Duluth, Lucas spends his free time camping, crafting, and working on various side projects.