When working on the next version of ObjectiveFrame with improved scripting features, I got the idea to explore how well an AI agent could create a mechanical structure using the built-in scripting language (ChaiScript) in ObjectiveFrame. Some initial sessions with Claude confirmed that it was possible to create scripts from textual prompts and run them in ObjectiveFrame's new scripting environment. This led me to my next idea: adding an interface to an AI agent directly in the ObjectiveFrame application. The AI agent would then be able to create mechanical structures based on the user's prompt. In the following sections, I will describe my process for implementing this feature.
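To make this concrete, here is a minimal sketch of how such a generated script could be bound and evaluated through the ChaiScript C++ API. The `addNode` and `addBeam` functions are hypothetical placeholders for illustration only, not the actual ObjectiveFrame scripting interface.

```cpp
#include <chaiscript/chaiscript.hpp>
#include <string>

// Hypothetical model functions -- stand-ins for whatever the real
// ObjectiveFrame scripting interface exposes.
void addNode(double x, double y, double z) { /* create a node in the model */ }
void addBeam(int n0, int n1) { /* connect two nodes with a beam element */ }

int main()
{
    chaiscript::ChaiScript chai;

    // Expose the model functions to the scripting environment.
    chai.add(chaiscript::fun(&addNode), "addNode");
    chai.add(chaiscript::fun(&addBeam), "addBeam");

    // A script like this could be returned by the AI agent and executed directly.
    std::string script = R"(
        addNode(0.0, 0.0, 0.0);
        addNode(0.0, 3.0, 0.0);
        addBeam(0, 1);
    )";

    chai.eval(script);
    return 0;
}
```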
As I mentioned in my previous post, I have been working on my graphics library Ivf++ 2.0. There are many features I would like to implement, but as this is a hobby project, I have limited time to work on it. My usual approach has been to select a feature to implement, do some research online, and then start coding. You start with great energy and enthusiasm, but you often get stuck on small things as you progress. Why doesn't this approach work? How does this function work? Why is this not rendering correctly? These are some of the questions that come up. You can spend hours figuring out a problem, and sometimes you never find the solution. This can be very frustrating and demotivating.
Some time ago, I started exploring AI tools. In 2022, I began experimenting with ChatGPT, first by letting it suggest improvements to existing Python code. I was amazed at the improvements it suggested. I also used it to answer questions on different programming topics. One thing that impressed me was the language model's ability to translate numerical code written in Python and NumPy to C++ and Eigen. It worked almost perfectly. Later, during 2023, I used it in the refactoring of ObjectiveFrame, where it helped me make the codebase cleaner and more readable.
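As an illustration of the kind of translation I mean (this is not the actual code from those sessions), a NumPy call such as `u = np.linalg.solve(K, f)` maps quite directly onto Eigen:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main()
{
    // Python/NumPy original (illustrative):
    //   K = np.array([[4.0, 1.0], [1.0, 3.0]])
    //   f = np.array([1.0, 2.0])
    //   u = np.linalg.solve(K, f)

    Eigen::Matrix2d K;
    K << 4.0, 1.0,
         1.0, 3.0;
    Eigen::Vector2d f(1.0, 2.0);

    // Equivalent Eigen solve; for a symmetric positive definite K,
    // an LDLT factorization is a common choice.
    Eigen::Vector2d u = K.ldlt().solve(f);

    std::cout << u << std::endl;
    return 0;
}
```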
Another AI tool that I adopted early was GitHub Copilot. It is a code completion tool that uses a language model to provide the completions. For me, it has made it fun to code again. It's almost like having a pair programmer who knows everything and helps out with the boring parts of coding. By combining classical chat-based AI with Copilot, I have implemented features in Ivf++ 2.0 at a pace I would not have managed otherwise. It feels like coding at the speed of light. In the rest of this post, I will describe some examples of how I have used these tools in the development of Ivf++ 2.0.
My needs for 3D programming, however, have not been for games but for interactive 3D applications, mainly in the field of engineering. I have used OpenGL for many years. For my PhD, I developed a 3D scene graph library, Ivf++, as a wrapper around OpenGL. It contained a set of nodes for implementing interactive 3D applications, such as ObjectiveFrame, a 3D beam analysis application focusing on real-time interaction.
During the last decade, OpenGL has evolved and the fixed-function pipeline has been deprecated. Modern OpenGL is based on shaders and the programmable pipeline. This has made OpenGL more powerful, but at the cost of complexity and ease of use. This article is about my journey implementing a new 3D graphics library for C++ that is easy to use and at the same time flexible. Ultimately, I want to get back to the ease of use of the fixed-function pipeline, but with the power of modern OpenGL.
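To illustrate the gap I want to bridge: with the fixed-function pipeline, a colored triangle takes a handful of immediate-mode calls, while modern OpenGL needs shaders, buffer objects, and vertex array objects before anything shows up on screen. A rough sketch of the contrast:

```cpp
// Legacy fixed-function OpenGL: a triangle in a few immediate-mode calls.
glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f);  glVertex3f(-0.5f, -0.5f, 0.0f);
glColor3f(0.0f, 1.0f, 0.0f);  glVertex3f( 0.5f, -0.5f, 0.0f);
glColor3f(0.0f, 0.0f, 1.0f);  glVertex3f( 0.0f,  0.5f, 0.0f);
glEnd();

// Modern OpenGL: the same triangle requires (roughly) these steps:
//   1. Write and compile a vertex shader and a fragment shader, then
//      link them into a program (glCreateShader, glCompileShader,
//      glCreateProgram, glLinkProgram).
//   2. Upload vertex data into a buffer object (glGenBuffers,
//      glBindBuffer, glBufferData).
//   3. Describe the vertex layout with a vertex array object
//      (glGenVertexArrays, glVertexAttribPointer, glEnableVertexAttribArray).
//   4. Bind the program and the VAO and issue glDrawArrays(GL_TRIANGLES, 0, 3).
// The goal is an API that feels as simple as the immediate-mode block
// above while using the modern programmable pipeline underneath.
```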