
[Prototype] Frontend Component Builder Agent (Early 2024)
In early 2024, I heard from a friend that building frontend components was tedious and annoying, so I built a frontend component builder agent powered by GPT-4 that worked through multiple steps to build a component from a prompt. The component was interactive while it was being built, and you could view and edit the code, see the upcoming steps the agent had planned, and track how far it had progressed.

This AI Coding App Turns Prompts into Fully Functional, Custom React Components!
This agent was built over a span of 2 weeks in February 2024 (prior to Devin, Lovable, Bolt, etc.).
The code is open-sourced at https://github.com/christopherhwood/react-component-agent.
It would process user prompts and turn them into complex React components that used TailwindCSS for styling.
It was designed to handle especially difficult-to-build components, like those that use contenteditable and require special handling of the DOM to retain user content and the caret position.
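For context, here is a minimal sketch of the kind of caret-preservation technique those components needed: save the selection as a character offset before React re-renders the contenteditable element, then restore it afterwards. The names (Editable, saveCaretOffset, restoreCaretOffset) are illustrative, not taken from the repo.

```tsx
import React, { useLayoutEffect, useRef } from "react";

// Measure the caret as a character offset from the start of the element.
function saveCaretOffset(el: HTMLElement): number {
  const sel = window.getSelection();
  if (!sel || sel.rangeCount === 0) return 0;
  const range = sel.getRangeAt(0).cloneRange();
  range.selectNodeContents(el);
  range.setEnd(sel.getRangeAt(0).endContainer, sel.getRangeAt(0).endOffset);
  return range.toString().length;
}

// Walk the text nodes until the saved offset is consumed, then place the caret.
function restoreCaretOffset(el: HTMLElement, offset: number): void {
  const sel = window.getSelection();
  if (!sel) return;
  let remaining = offset;
  const walker = document.createTreeWalker(el, NodeFilter.SHOW_TEXT);
  let node = walker.nextNode();
  while (node) {
    const len = node.textContent?.length ?? 0;
    if (remaining <= len) {
      const range = document.createRange();
      range.setStart(node, remaining);
      range.collapse(true);
      sel.removeAllRanges();
      sel.addRange(range);
      return;
    }
    remaining -= len;
    node = walker.nextNode();
  }
}

export function Editable({ value, onChange }: { value: string; onChange: (v: string) => void }) {
  const ref = useRef<HTMLDivElement>(null);
  const caret = useRef(0);

  // After React rewrites the div's content, put the caret back where it was.
  useLayoutEffect(() => {
    if (ref.current) restoreCaretOffset(ref.current, caret.current);
  }, [value]);

  return (
    <div
      ref={ref}
      contentEditable
      suppressContentEditableWarning
      onInput={(e) => {
        const el = e.currentTarget;
        caret.current = saveCaretOffset(el); // capture before the state update triggers a re-render
        onChange(el.textContent ?? "");
      }}
    >
      {value}
    </div>
  );
}
```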
It also had special instructions to build modular code that followed React best practices for frontend state management and avoided large, runaway components. This was to address concerns that LLM-generated code is sloppy and hard to adjust or customize later.
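As a hypothetical illustration of the structure the agent was prompted toward, state logic lives in a reusable custom hook so the component itself stays a thin, Tailwind-styled view layer; none of these names come from the actual codebase.

```tsx
import { useState, useCallback } from "react";

// State and behavior are extracted into a hook that can be reused or tested alone.
function useTodoList() {
  const [items, setItems] = useState<string[]>([]);
  const add = useCallback((text: string) => setItems((prev) => [...prev, text]), []);
  const remove = useCallback((i: number) => setItems((prev) => prev.filter((_, idx) => idx !== i)), []);
  return { items, add, remove };
}

// The component is only markup and Tailwind classes, not a runaway state machine.
export function TodoList() {
  const { items, add, remove } = useTodoList();
  return (
    <div className="space-y-2 p-4">
      <ul className="space-y-2">
        {items.map((item, i) => (
          <li key={i} className="flex justify-between rounded bg-gray-100 px-3 py-2">
            <span>{item}</span>
            <button className="text-red-500" onClick={() => remove(i)}>x</button>
          </li>
        ))}
      </ul>
      <button className="text-blue-600" onClick={() => add("New item")}>Add item</button>
    </div>
  );
}
```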
Finally, since the generation process was slow, I instructed the agent to always keep the code runnable, so the user could play with a live demo of what had been built so far while the agent worked on the next steps in the background.
Technical Innovation
Technically, the agent was relatively cutting-edge in that it ran multiple LLM requests in parallel for the foundation-level code and then picked the best result to build on. I did this because I identified the foundation code as the most critical step: if the agent got off on a bad foot, everything that came later was basically worthless.
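A minimal sketch of that best-of-n generation step, assuming the OpenAI Node SDK; the model name, prompt, and function names here are placeholders rather than the repo's actual code.

```ts
import OpenAI from "openai";

const openai = new OpenAI();

async function generateFoundationCandidates(prompt: string, n = 3): Promise<string[]> {
  // Fire n independent generations in parallel; the foundation code is the
  // highest-leverage step, so extra samples are worth the extra cost.
  const requests = Array.from({ length: n }, () =>
    openai.chat.completions.create({
      model: "gpt-4-turbo-preview",
      messages: [{ role: "user", content: prompt }],
    })
  );
  const responses = await Promise.all(requests);
  return responses.map((r) => r.choices[0].message.content ?? "");
}
```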
To pick the best code, I first checked that each candidate was output as a JSON object in the expected shape, and then asked the LLM to select the best one. Using an LLM as a judge this way is now a generally accepted best practice for evals as well.
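A sketch of that selection step under the same assumptions: candidates that fail to parse as the expected JSON object are discarded, and the survivors go to an LLM judge. The prompt wording and the { code: string } shape are illustrative, not the repo's.

```ts
import OpenAI from "openai";

const openai = new OpenAI();

type Candidate = { code: string };

function parseCandidate(raw: string): Candidate | null {
  try {
    const obj = JSON.parse(raw);
    return typeof obj.code === "string" ? { code: obj.code } : null;
  } catch {
    return null; // malformed output never reaches the judge
  }
}

async function pickBest(rawCandidates: string[]): Promise<Candidate> {
  const valid = rawCandidates
    .map(parseCandidate)
    .filter((c): c is Candidate => c !== null);
  if (valid.length === 0) throw new Error("no candidate produced valid JSON");

  // Ask the model to act as a judge over the surviving candidates.
  const judgePrompt =
    "You are reviewing candidate React components built for the same spec. " +
    "Reply with only the number of the best candidate.\n\n" +
    valid.map((c, i) => `Candidate ${i}:\n${c.code}`).join("\n\n");

  const res = await openai.chat.completions.create({
    model: "gpt-4-turbo-preview",
    messages: [{ role: "user", content: judgePrompt }],
  });

  const parsed = parseInt(res.choices[0].message.content ?? "", 10);
  const index = Number.isNaN(parsed) ? 0 : parsed;
  return valid[Math.min(Math.max(index, 0), valid.length - 1)];
}
```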
Use Case
I used components generated by the agent for the landing page and for some of the frontend features of the application itself.
Fun Note
I used ElevenLabs to generate the voiceover in the video recording :)