GenAI Documentation
I led the enhancement of Vectice's documentation experience, integrating GenAI into the existing text editor.

I drove discussions with engineering to refine and iterate on the designs. In parallel, I worked with the PM and the QA team to thoroughly test and refine the experience. Delivering this project on a tight timeline helped Vectice secure a banking SaaS deal.
ROLES
UX/UI Designer, Interaction Designer
TOOLS
Figma
DURATION
3 weeks
01
Defining the goals
Help data scientists auto-summarize their complex projects
Vectice's generative documentation lets users summarize their work in Vectice automatically, eliminating the need to type summaries by hand. Data scientists' work ranges from creating datasets and training models to iteratively improving those models, and the metadata captured from this activity is used to generate the documentation. This matters because data scientists generally find documenting their work cumbersome and time-consuming.

Streamline text editing: "Ask AI anything"
To help users write about their initiatives, we also incorporated standard editing and review features like spell check, grammar correction, text shortening, and language simplification. These features can be found in leading generative documentation apps like Notion.
02
Challenges
Short timeline
Given the tight timeline, I devised simpler designs that engineering could implement more easily. I worked closely with engineers to align on the designs and reused components from the existing component library. While the design's scope had to be scaled down, I made sure the overall user experience was not significantly compromised.

Limited studies about the UX of generative documentation
This is a relatively new space with few user experience studies so far. Most articles focus on guiding users to write better prompts rather than on the interface itself. To build my own understanding, I did a deep dive into different generative models, installing programs like Stable Diffusion and joining the Discord channels for Midjourney and DALL·E to try them out firsthand.
03
Researching GenAI
Analyzing the best practices by leading GenAI documentation apps
I conducted thorough testing of generative text editors from competitors, including Notion, ClickUp, SimplifiedAI, and FigmaAI. I focused primarily on each app's user control, the AI's flexibility, how limitations were communicated, measures to prevent user errors, and how well the app guides users to subsequent workflows. I also closely observed how components behaved as I interacted with them, recorded those behaviors, and considered why each interaction was designed that way. Below is a snapshot of how I organized the insights I gathered.



One specific insight was that all applications allow users to control where the generated text is inserted, either replacing existing text or inserting below it, and to regenerate results. Notion excelled at communicating the potentially misleading nature of generated text. Below is a screenshot of ClickUp AI vs. Notion AI: Notion openly discloses that "AI responses can be inaccurate or misleading." Both feature similar options, but ClickUp AI adds a convenient option to edit inputs.

04
Initial Designs
Initial wireframes
I developed sketches and screen designs for various workflows, illustrating sequential screens to showcase the flow along with minor interactions and states, such as hover and click states, loading states, and states where components hold a very large or very small amount of data. The snapshot below provides an overview of all screens.


Gathering feedback on designs
I led discussions with Engineering to hear their feedback and explain the interactions. For example, the Frontend Lead pointed out that implementing the floating "Ask AI" component would be challenging within the deadline. Consequently, we chose a more engineering-friendly option: placing the "Ask AI" component inside the editor, a change that minimally impacts the overall user experience but saves engineers a great deal of time.



Designing for edge cases and errors
Given the instability of the OpenAI API, I ensured that if an error occurs, the textbox communicates it to the user in real time. It also explicitly indicates whether the error originates from Vectice or from the language model, giving the user clear information on how to address the issue.
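As a rough illustration of this behavior, the TypeScript sketch below maps a failed generation request to a user-facing message that names the error's source. The names, status codes, and message wording are all hypothetical, not Vectice's actual implementation.

```typescript
// Hypothetical sketch: classify a failed generation request so the editor
// can tell the user whether the error came from Vectice's backend or from
// the upstream language model. All names here are illustrative.

type ErrorSource = "vectice" | "llm";

interface GenerationError {
  source: ErrorSource;
  message: string; // user-facing text shown in the textbox in real time
}

// Assume the backend proxies the LLM call and surfaces upstream failures
// as 502/503; anything else is treated as an internal Vectice error.
function classifyError(status: number, detail: string): GenerationError {
  if (status === 502 || status === 503) {
    return {
      source: "llm",
      message: `The language model is currently unavailable (${detail}). Please try again shortly.`,
    };
  }
  return {
    source: "vectice",
    message: `Something went wrong on our side (${detail}). Please contact support if this persists.`,
  };
}
```

Separating the two cases up front keeps the UI copy honest: a retry suggestion for transient model outages, and a support path for genuine internal failures.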


It is also important to disclose the feature's limitations. We communicate to the user that "Ask me anything" does not have access to internal project data.



05
Testing & Iterating Designs
Issue 1: The quality of the generated text varied
After the first implementation, the team tested the quality of the prompts and results using real-world data science examples. Even after refining the prompts, however, the summarization results still varied considerably.

To address this, we brainstormed a solution where Vectice generates four results simultaneously, letting the user choose the one that best suits their needs. This approach is inspired by Midjourney and other image-generation tools. The visual below illustrates the change: the user can switch between tabs to view the results and then insert one.
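The fan-out itself is conceptually simple: fire several independent generation requests and wait for all of them before populating the tabs. The TypeScript sketch below shows the idea; `callModel` is a hypothetical stand-in for the real LLM client, not Vectice's actual code.

```typescript
// Hypothetical sketch of generating four candidate summaries in parallel
// so the user can pick the best one from a tabbed view.

async function callModel(prompt: string, variant: number): Promise<string> {
  // Placeholder: a real implementation would call the LLM API here,
  // typically with sampling enabled so each variant differs.
  return `Summary variant ${variant} for: ${prompt}`;
}

// Fire n requests at once and return all results for the tabbed UI.
async function generateCandidates(prompt: string, n = 4): Promise<string[]> {
  const requests = Array.from({ length: n }, (_, i) => callModel(prompt, i + 1));
  return Promise.all(requests);
}
```

Running the requests concurrently means the user waits roughly as long for four candidates as for one, which is what makes the pick-your-favorite pattern viable.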



Issue 2: Users found it difficult to continuously experiment and change the prompts.
After further testing, we identified challenges with continuous experimentation and prompt refinement: the workflow had a bottleneck. Users had to click back and forth to compare the prompt with the results, losing the context of the result along the way.

To address this, I researched other generative software like Stable Diffusion, ContentBot, Simplified, and Midjourney. It became evident that these platforms often place prompts and inputs on the same page as the generated results, facilitating seamless navigation between these crucial steps for the user.





Adjusting the design
Subsequently, I refined the design by placing the action buttons close to the relevant section. Both 'Update prompt' and 'Regenerate results' now sit adjacent to the prompt, so the user can easily discern the next action. We also decided to remove prompt details such as 'update date' and 'creator,' recognizing that users typically do not find them important. This adjustment allows for smoother, continuous refinement of the prompt alongside the generated text.


06
Final Product
Video of Generative Documentation Feature
One of the objectives of this project was to assist data scientists in auto-documenting their complex projects. This new enhancement allows users to take advantage of GenAI to summarize their intricate work done with APIs and integrations. The following illustrates the workflow of auto-documenting an iteration using a pre-saved prompt.



Video of Ask AI Feature
The second objective of this project was to streamline editing and writing documentation. The AI gives the user the flexibility to generate any text, while conveying its limitations and offering simple editing functions.

07
Reflection
This project was truly exciting and gave me the opportunity to experiment with cutting-edge technology and ultimately bring it to Vectice. I learned a lot from Christian, the former Head of Product of GenAI at Meta, who shared deep knowledge of how software systems connect to LLMs and use them to generate specialized content. It was very rewarding to see so many teammates succeed together: feedback and knowledge from every team, from Sales to QA, Growth, Product, and Engineering, led to a better end product.

While the experience was enriching, I wish I had more time to research other applications leveraging GenAI, which would have let me make even more informed design decisions. Additionally, expanding the scope of the design would have allowed me to define more intricate interactions and animations to create a truly exceptional user experience.

Furthermore, as more customers use these features, I would be very curious to hear more of their feedback to further refine these designs.

While testing out GenAI apps, I explored how to use and write prompts in Stable Diffusion with the Deforum extension to create abstract AI-generated visuals. Feel free to take a look! Watch AI Generated Visuals