At CYFRON SOFTWARE TRADING, we often explore ways to help teams move faster without sacrificing clarity or aesthetic quality. A recent internal experiment confirmed a quietly growing belief in our product group: web apps powered by AI don’t have to be slow, costly, or code-heavy during their prototyping stage.
Using only Figma Design and Figma Make, we successfully built and tested a compact mobile web app, right down to running it on a real phone. The concept was simple but practical: the user snaps a picture of a nutrition label (in our test, a Cheez-It package), and AI extracts macronutrients such as calories, protein, carbs, and fats. Gemini 2.5 Flash, one of Google's recent models, then generates a clean nutrition summary along with a tailored health insight. All of this runs in the browser, with no custom code or backend integration.
What made this workflow stand out wasn’t just the speed. It was the precision. Design choices such as the 360×600 px mobile frame, pastel #E9F2D background, and banana-themed icons made it easy to communicate the app’s friendly, food-focused intent. We'd typically let these elements evolve over time, but being able to deploy them directly into a working prototype allowed for real-world visual validation within minutes.
The layout relied on Figma's auto-layout system so frames resized cleanly when previewed on different devices. Once the designs were imported into Figma Make, we connected the prototype to the Gemini API, with no server setup required. Camera access and a staggered data animation (a 0.3 s delay between macronutrient results) rounded out the experience.
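The staggered reveal is simple to express in code. The sketch below assumes each macronutrient row exposes a `show()` callback; the 0.3 s interval comes from the prototype, while the function and property names are illustrative.

```typescript
// Each result row appears 300 ms after the previous one.
const STAGGER_MS = 300;

// Delay (in ms) before the row at `index` is revealed.
function revealDelay(index: number): number {
  return index * STAGGER_MS;
}

// Schedule reveals for a list of result rows (browser context).
function staggerReveal(rows: { show: () => void }[]): void {
  rows.forEach((row, i) => setTimeout(() => row.show(), revealDelay(i)));
}
```

With four macronutrients, the last row lands at 900 ms, which keeps the whole animation under a second.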
A few UI bugs required hands-on fixes. One involved setting the container’s height to 100% to avoid rendering issues — a reminder that even rapid tools require thoughtful layout control. But once sorted, the app flowed smoothly: from “Add Food” to camera access, AI analysis, and final results.
The numbers told their own story. From the real Cheez-It label, the app correctly identified the calorie count (210) and accurately extracted the protein and carb values. The AI also returned brief, natural-language assessments, such as whether the food might suit an athletic or sedentary lifestyle.
This kind of rapid prototyping — fully visual, integrated with AI, and testable on mobile — offers product and design teams a powerful playground to validate ideas before investing in full-stack development. We see strong parallels with our core values: usability built into the process, innovation supported by meaningful tooling, and design ergonomics that scale.
Figma Make, combined with evolving AI platforms like Gemini, creates new possibilities not just for designers but for cross-functional teams. If you're exploring how to bring AI into your UX early, we encourage giving this method a try.
At CYFRON, we believe thoughtful design brings clarity. Tools like these are how we test that belief in action.