Monoface™ © 2022–2026

Independent Interface Design Team.

Where Discipline & Intuition Merge.

(The founder’s pain)

Every founder knows this feeling:

You build a product and put everything you have into it, excited about day X when users are happy and investors applaud.

After months of iterations, you’re done with the designs, and the engineering team begins implementation. Everything feels promising: the process is strictly on plan.

And weeks later... Reality sets in.

You discover that the mockup and the running app aren’t the same product. Yes, technically they are, but thousands of small details make the product feel far from how it was imagined in static Figma frames.

The magic of design, full of dopamine and novelty, now turns into weeks of meetings and rework, weighing heavily on your shoulders.

This pattern repeats, and each attempt to fix the problem just shifts it a bit with no real impact. As a result, the team’s motivation drops, time runs out, and the runway burns.

“We imagined it differently.”

It’s a painfully familiar situation we’ve accepted as the nature of the work for decades.

Not anymore.

(What actually happens)

In most teams, design is validated only after development ends. Staging is the first time anyone sees the interface running in conditions close to production. The timeline is predictable: weeks in design, weeks in engineering. When the full cycle ends, the first real user session reveals the first significant inconsistencies. The team now has to ship a version that doesn’t quite work, or spend another full cycle adjusting to reach the version that should have been tested at the start.

Sometimes it’s manageable.
Sometimes, a product tragedy.

This is where the production machine starts cutting corners and accepting compromises on how the live product looks and feels, which is especially painful for early startups: the smaller the product, the greater the compromise.

And even then, it doesn’t solve the problem substantially. The main pain points remain:

The development process still exists within the design cycle, which dramatically extends the timeline.

The engineering team commits code before anyone knows whether the design actually solves the problem. The functionality exists on paper, but its real value doesn’t emerge until the feature runs in production.

As a result, design validation failure becomes a recurring risk. The options are narrow. The best case: another expensive development cycle to adjust the experience.

The worst: entire features get deprecated, which means resources and time are burned.

The cycle repeats until the team runs out of either patience or budget.

(The wall between the teams)

Development and design exist inside the same large cycle, but they live in different environments. A container built with auto-layout in Figma won’t behave the same way inside a Flex component that inherits constraints from three parents up. The gap between the two is structural, not communicative.

Most of the friction that appears late in development exists in this translation. A design file can’t verify how a component will behave inside the actual product. Changes that emerge during development have no clean way back into the original design. Every round of iteration loses something in the crossing. Individual drifts are invisible in isolation. But they accumulate into something users feel without being able to name it.

(What changed.)

In the last two years, many things have changed.

AI agents help turn a design into working HTML and CSS in days instead of weeks.

The same agents operate inside the actual codebase. They read the existing context and produce code that integrates directly into the product’s repository, matching the structure already there.

Of course, none of this turns a designer into a developer. What AI does is expand what a designer can verify on their own before engineering commits time to a feature. It frees the developer from the stream of late-stage adjustments, leaving their hours for the problems only engineering can solve.

This shift made a different workflow possible, and we built ours around it.

(How the work runs)

At Monoface™, we restructured our design processes to impact the whole production cycle.

Instead of a static mockup, the design team ships a running interface. Components arrive with props defined, states wired, and interactions behaving exactly as they will behave in production. Developers pull the branch and integrate the components directly, running them locally at the start of their sprint.

Decision-making speed

The pipeline produces something usable every one to five weeks, depending on the phase:

UX Simulation (5–10 days).

Before the product has a visual language of its own, the fastest way to test an idea is on a brand-agnostic UI.

Monoface’s component set plays the role of a placeholder so the interaction can be built in the real runtime environment, web or native, without investing in visual appearance yet. What arrives is a working skeleton of the user experience: what users do, in what order, and under what conditions.

Visual Language (from days to weeks*)

*Depends on goals and complexity.

When the product is serious about its design language, a short session will help define how the interface fits the brand. Existing libraries cover the defaults (buttons, inputs, switchers), so the session focuses on what differentiates the product.

UI Room (avg. 3–5 weeks)

The product’s design system, in code. Tokens, color system, components, and component groups. Interactive and configurable, so each element can be placed into the scenarios it needs to survive. Bundled with usage recommendations, code snippets, and links to the product’s actual source.

UI Prototypes (2–3 weeks)

Components, rules, and visual language come together into screens extremely close to the shipping product. The interactive pages or native screens that engineering picks up as the specification.

These modules then expand and adjust, keeping the product’s design environment alive and aligned with production.

Quality assurance

What gets approved in the UI Room is what engineering adapts. No reinterpretation step between the two.

The UI Room replaces the Figma file with a GitHub repository. Components live in the product’s framework, with tokens mapped to the product’s tokens and versioned across branches.

Founders open the UI Room preview to evaluate direction. Engineers pull components into production branches without having to translate between environments. Designers configure and iterate in the same repository rather than a separate design tool. The transfer of artifacts happens continuously, not in a single hand-off event.

This is a structural change in how quality assurance happens.

Efficiency & cost

Because the work happens within the product’s framework, the design cycle compresses into a self-contained loop with its own cadence, independent of engineering’s sprint calendar.

Designers iterate on specific flows in short cycles, make small adjustments to the AI model, and keep developers from being bombarded with “6px left” requests.

The recurring loop is what makes the rest of the economics possible.

(What this does not solve)

This workflow doesn’t remove all friction. A designer’s first version of a component won’t always have the structure a senior developer would give it, and refactors still happen — just earlier, and in much smaller increments. Real data behavior still needs engineering time that design and prototyping can’t compress. Those limits are real.

But it does what it was built to do: design stops being a bottleneck before development starts. What the design team approves is what the product ships, without the usual translation step in between.

We configure and run this workflow with every client.

Mozilla, 2025–26

(Web browsing for AI products.)

Tabstack API is a powerful web content extraction and transformation toolkit designed specifically for AI agent builders. It provides intelligent web scraping capabilities, content processing, and structured data extraction through a simple REST API. Built by developers, for developers and product owners.

Area Agentic AI, API
Environment Web Browsers
Started Jun, 2025
Credits Jacob Ervin Nick Chapman Stafford Brooke
Tabstack — Usage Overview
[TBST-I-1] Usage Overview
Tabstack — Home
[TBST-I-2] Home
Tabstack — Playground
[TBST-I-3] Playground
Tabstack — API Key creation → Success
[TBST-I-4] API Key creation → Success

(Our role)

We established the design language for the entire product, aligning it with Mozilla’s updated visual identity.

The interface had to work for two very different users: a non-technical person managing API keys, usage, and billing, and a developer testing AI agents in a live playground before writing a single line of integration code.

Full overview will be available in Q3, 2026

[NDA] Bloom Labs, 2025–26

LIB, 2025

(Read-it-later, reborn.)

A mobile app for saving articles and web pages, stripping away ads and layout inconsistencies, and presenting everything in a clean, customizable reading format. It also reads articles aloud.

Area Reader, Productivity
Environment iOS
Started Aug, 2025
Paused Oct, 2025
Credits Nick Chapman
Folio — Saves
[FOLI-I-1] Saves
Folio — Player
[FOLI-I-2] Player
Folio — Reader
[FOLI-I-3] Reader
Folio — Reader (JP)
[FOLI-I-4] Reader (JP)

(Our role)

We adapted existing Pocket patterns to Folio’s visual identity without breaking users’ existing habits. The core challenge was expanding functionality — more ways to interact with saved content — while keeping the reading experience as calm and focused as the original.

Full overview will be available in Q3, 2026

Mozilla, 2024

(Run LLM on your own machine.)

While Hiro can still provide access to popular models via APIs, the main idea is to interact with LLMs locally, directly on the device, using its own resources, so user data stays on their hardware.

Beyond standard chat, it indexes local files of any size for context, supports custom agents with individual settings, and enables them to run in multi-agentic mode to efficiently handle complex tasks.

Area Agentic AI, Local LLMs
Environment macOS, Windows
Started Feb, 2024
Ended Dec, 2024
Credits Nick Chapman
Hiro — Chat → Reasoning
[HIRO-I-1] Chat → Reasoning
Hiro — Chat → Analyzing & Planning
[HIRO-I-2] Chat → Analyzing & Planning
Hiro — Datasets
[HIRO-I-3] Datasets
Hiro — Chat → Analyzing & Planning (JP)
[HIRO-I-4] Chat → Analyzing & Planning (JP)

(Our role)

We started with familiar chat patterns as a solid foundation, then expanded the interface for genuinely new features: local file access, machine resource management, and multi-agent workflows where several agents share a single context. Many decisions also carried over from our experience on Mavii.

Full overview will be available in Q3, 2026

[NDA] Walkway Inc, 2024/25

Mavii Studios, 2023

(Ask anything on the web.)

Mavii was an early experiment, started when the AI boom was just beginning in 2023. It explored how a language model could actively work with the web — finding links, pulling media, assembling information on request. First steps toward agents that adjust their own settings based on context.

Area Web Search, AI
Environment Web Browsers
Started Jan, 2023
Ended Dec, 2023
Credits Nick Chapman Mark Mayo
Mavii — Chat → Generated Response
[MVII-I-1] Chat → Generated Response
Mavii — Home
[MVII-I-2] Home
Mavii — Chat → Reasoning
[MVII-I-3] Chat → Reasoning
Mavii — Chat → Generated Response (JP)
[MVII-I-4] Chat → Generated Response (JP)

(Our role)

We combined new and traditional interaction patterns to determine how AI-to-user communication works best. This included adaptive widgets that reshape themselves based on content type, and extended side panels that let users explore sources in a familiar search format.

Full overview will be available in Q3, 2026

[NDA] TGM Ventures, 2022

More of our work is on the way: full overviews and new products will be available in Q3, 2026.

(What is Monoface™)

Monoface™ is an independent design team, focused on web & native interfaces. We deal with logic, macro-/micro-interactions, user behavior, visual psychology, and UI systematization.

Leadership / QA Ilya M.
Coordination Dmitry S.
Interface Design / AI Adoption Vlad K.
Interface Design Yura S.

(When you need us)

Startups come to us when they:

Build an innovative product that requires its visual language, interactions, and experience to be defined precisely.

Need functional web or native prototypes built in just weeks to validate ideas far before development starts, with a clear design-to-dev handoff and communication that speeds up iterations.

Want in-house dedication without the hiring and management overhead. An autonomous design team with strong quality assurance, to avoid excessive control.

We build interfaces in the same environment developers work in. HTML/CSS on the product's actual framework, tested in the browser, version-controlled through GitHub. Design decisions are validated before development begins, and what gets approved is what gets built.

(Where we operate)

We operate from Bangkok and Kyiv. Legally registered as Monoface Inc in the United States.

Monday–Friday 11:00–19:00
Time in Kyiv
Time in Bangkok

(Start with a clear plan.)

Validate ideas and present to investors early. Solutions work best for startups at MVP phase.

We offer three stages of the design-to-production cycle. Each runs as a standalone engagement or as part of the full sequence, with a predefined scope and an interactive artifact at the end.

Pick the one that fits where the product is right now: a concept to test, a component system to build on, or screens ready to share with engineers.

All adapted to the product’s real environment.

(UX Simulation)

5–10 days* — $5,000

Test the idea before anyone develops it.

(UX Simulation preview)

An interactive prototype on Monoface’s own product-agnostic UI components.

The brand-neutral kit serves as a placeholder for the product’s interface while its visual language is still being decided. Attention goes to how the interaction behaves, and the surface comes later.

Designed for the earliest stage. The idea needs to be tested before the team invests in a visual language or brings in engineers.

Built in code, running in a browser or natively, depending on the product’s nature. The clicks and interactions are the actual ones users will meet in production. Only the visual identity is still a placeholder.

Core flows, navigation, screen states. Everything the team is about to commit engineering time to gets tested here first, before anyone writes the code that depends on those decisions.

Ready to share in 5–10 days.

For web products, a URL anyone can open in a browser, with no third-party software to install. For native products, the OS-specific sharing path depends on goals.

Tip: If the product already has an established UI system, UI Room + UI Prototype will be more accurate. They build on the existing design language rather than on a placeholder.

*Scope: a core set of user flows. For multiple flows, the work extends to weekly engagement ($4,000/week).

Example: AI-search web app.

The module: Query → Response interactions.

What we solve: how the user asks a query, receives a response, attaches sources, reviews them, and asks follow-up questions.

The goal is to verify the concept before anyone commits engineering time to it. Everything beyond the core (expanding source formats, handling edge cases, adding functional modules) continues as weekly engagement, in smaller recurring cycles.

(UI Room)

3–5 weeks* — $15,000

The product's design system, built in code.

(UI Room preview)

A component system written in the product's own framework, whether that is React, Vue, Swift, or something else. Developers receive working modules that already run in the codebase. The handoff skips the usual interpretation step.

Tokens, color, typography, and interaction states get resolved once, inside the runtime that will eventually ship them. What reaches development is fully formed.

Everything works on framework-specific tokens. Color and typography live as a single set of variables that every component reads from. Update a token, and the system updates with it.

Components arrive with the right props, states, and edge cases already defined. Developers pick them up and assemble the UI.
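The two paragraphs above can be sketched in a few lines of TypeScript. This is a minimal illustration, not the actual UI Room code: the names (`tokens`, `ButtonProps`, `buttonStyle`) and values are invented for the example, and a real system would wire tokens into a framework such as React.

```typescript
// Illustrative sketch: tokens as a single set of variables that every
// component reads from (all names and values are hypothetical).
const tokens = {
  colorPrimary: "#1a1a1a",
  colorDanger: "#c0392b",
  fontBody: "16px/1.5 'Inter', sans-serif",
} as const;

// Props, states, and edge cases are declared up front, so engineering
// receives a fully specified contract rather than a static mockup.
interface ButtonProps {
  label: string;
  variant?: "primary" | "danger"; // edge case: falls back to "primary"
  disabled?: boolean;             // state wired in, not bolted on later
}

// The component resolves its appearance from tokens only:
// update a token, and every consumer updates with it.
function buttonStyle({ variant = "primary", disabled = false }: ButtonProps) {
  return {
    background: variant === "danger" ? tokens.colorDanger : tokens.colorPrimary,
    font: tokens.fontBody,
    opacity: disabled ? 0.5 : 1,
  };
}
```

Because every component reads from the same `tokens` object, changing one value there restyles the whole system, which is the mechanism the paragraph above describes.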

One source for every design decision. Every subsequent screen, page, or prototype draws from the same system, so drift gets caught before code review.

Version control through GitHub.

The design team keeps working, updating tokens and adding components, while development builds with no interruption against a stable version that the ongoing changes do not touch.
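The stable-version arrangement above can be sketched with plain git. This is a hedged illustration, assuming a tag-based pinning convention; the repository name, tag, and commit messages are invented for the example.

```shell
# Illustrative sketch: engineering pins to a tagged release of the
# component repository while design keeps committing to the main branch.
git init -q uiroom
git -C uiroom -c user.name=design -c user.email=design@example.com \
    commit -q --allow-empty -m "UI Room: baseline tokens and components"
git -C uiroom tag v1.0.0   # the stable version engineering builds against
git -C uiroom -c user.name=design -c user.email=design@example.com \
    commit -q --allow-empty -m "WIP: expanded color tokens"
# Checking out v1.0.0 never sees the WIP commit, so development proceeds
# against a frozen snapshot while design iterates on main.
```

The same effect can be achieved with release branches; the point is only that a version-controlled design system gives engineering an immutable reference that ongoing design work cannot disturb.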

*Scope: a defined set of UI components for one product area. For systems with several product surfaces or wide component variation, the weekly engagement ($4,000/week) continues past this baseline.

Example: AI-search web app.

What we solve (on top of what UX Simulation already validated): Color/typographic systems and component groups. Buttons, switchers, selectors, inputs, modals, side panels, and product-specific components.

This is the foundation from which every subsequent screen is assembled.

Further detailing and expansion continue as weekly engagement, picking up once the product area grows past the initial room.

(UI Prototype)

2–3 weeks* — $10,000

Full-fidelity screens the team can review and engineering can build from.

(UI Prototype preview)

An interactive prototype built entirely from the product’s own UI Room components. It normally runs after the UI Room is ready.

The components already exist: this stage combines them into production-ready screens and flows.

Presentation-ready in a browser tab. You can share the link directly with your team, stakeholders, or investors.

Validation at full fidelity. The prototype shows the product’s actual visual language and rules, without any demo abstractions.

Feedback collected here reflects what the product will feel like once it ships.

*Scope: key screens and user flows within one area.

Structurally, the work builds on the UX Simulation + UI Room base, detailing the arrangement logic of elements and their behavior.

If the product needs broader product-area coverage or has multiple linked flows, the weekly engagement ($4,000/week) continues beyond this baseline.

(The AI Dawn.)

Even a few years ago, it was hard to picture an AI agent sitting beside you as an everyday tool. It writes code, sketches designs, drafts copy, generates images, builds strategies, and much more — all to serve the goals you set.

This is the reality we live in.

It feels like magic, and it leads many in our industry to the conclusion that designers, developers, and founders no longer need each other. We think this conclusion is hasty.

Yes, it is an uncertain time, where cutting costs and automating feel natural. AI tools grow alongside the economic slowdown and feel like a magic wand that solves problems.

But the real risk is in being radical: excessive reaction, whether it’s accelerating or slowing growth, blinds people and damages markets.

Humanity has seen these patterns before. History has lessons worth revisiting.

(Lesson One)

Fear of change is a huge mistake.
We should accept new rules
to open new doors.

In the 1970s, music creation was a privilege reserved for professional engineers. Recording a track was a real pain — an extremely expensive process that had to happen in a properly geared studio, with modular synthesizers, rack units the size of a refrigerator, and a person sitting on the floor, plugging in wires by hand just to get a single sound out.

That’s why most people never even tried. The cost of entry was so high that it filtered out everyone except those who could afford the risk.

Chamber Music Festival.
Washington DC.

1970

Chamber Music Festival, Washington DC, 1970

Things changed dramatically when portable synthesizers and sequencers entered the market: everyone could record their own track, sometimes in just minutes.

It pushed boundaries and created a serious challenge for professionals: the musicians’ union in the US lost nearly half of its members between 1976 and 1995, dropping from 331,000 to 150,000¹.

Industry was reacting, sometimes absurdly.

The legendary Moog synthesizer was temporarily banned from commercial sessions². In 1984, picketers showed up outside a concert hall in Oakland to protest a single performer who could produce the sound of an entire band from a self-built electronic instrument³.

People behaved like they always do, but was it self-defense, or fear of losing the way things worked? Let’s see what happened on the market:

Yamaha expected to sell 20,000 units of the DX7. Orders hit 150,000 (!!!) in the first year, and by 1986, its presets appeared in 40% of the number-one singles on the Billboard Hot 100.

Then even more magic happened:

Yamaha released the QY10 — a portable sequencer, synthesizer, and drum machine packed into a box the size of a VHS tape that ran on AA batteries. The press called it a “walkstation.” It didn’t sell in DX7 numbers, but it did something fundamental: it took music production out of the studio entirely. You could write a full arrangement on a train, in a hotel room, or on a park bench.

Imagine a happy man’s face, resting after a hard day, sketching melodies and rhythms on the Yamaha QY10. The whole studio fits in a bag, literally!

Yamaha QY10 — portable sequencer and tone generator

Yamaha QY10.
A portable sequencer and tone generator.

1990

As a result, instead of shrinking, the industry exploded. The recorded music market in the US nearly tripled between the early 1980s and 1999, peaking at $23.7 billion adjusted for inflation.

Today, over 11 million artists have released music on streaming platforms. A far cry from 331,000 in 1976, isn’t it?

(Lesson Two)

Cutting teams is not a growth strategy.

Personal computers, after entering the market, made business owners far too excited about cutting headcount. Under economic pressure, automation (with its promises) sounded like an opportunity founders couldn’t pass up.

This is where mass layoffs happened.

General Electric’s workforce dropped from 404,000 to 292,000 by 1989. Hundreds of thousands of people lost their jobs, and entire professions shrank by half or more within two decades. Psychologically, the layoffs caused a deep sense of failure and a blow to self-worth that many never recovered from. In the 1980s, laid-off workers were typically rehired once the economy recovered. By the early 1990s, this stopped working — jobs were simply disappearing for good.

Did companies come out ahead? Ironically, no. Between 1982 and 2000, S&P 500 firms that cut jobs without restructuring their businesses showed no better returns than those that did not. The people who stayed made more mistakes, morale dropped, and quality got worse. Savings existed in a spreadsheet; in practice, most of these companies just got slower.

This pattern repeated across industries: companies that treated people as costs to cut started underperforming those that treated them as assets to develop.

Commodore 64 — 8-bit home computer

Commodore 64.
8-bit home computer.
Commodore International.

1982

Decades passed, and now we can see billions¹⁰ of people using computing devices every day. Engineers, professors, designers, accountants, architects, artists, writers, students — all are still busy at work.

¹⁰ ∼250M+ PCs are being shipped every year, 262.7M in 2024 IDC↗, 270M+ units in 2025 Gartner↗

And the reason why the collapse didn’t happen is simple: the power of computation is nothing without the operator who sets the right direction and controls it.

The companies that win give their people better tools and let them use them well. For those rushing to replace everything with LLMs, history already shows the outcome.

(The opportunity we see)

AI is creating an impressive shift in interfaces.

For decades, design and development happened in separate rooms, with separate tools and languages. A designer would create a vision in Figma. Weeks later, a developer would rebuild it in React. Details were lost in the gap. The product that shipped was never quite the one designed.

The split between design and development was a limitation of the tools, and felt natural only because these tools made it so.

AI dissolved this wall.
This change is real and permanent.

Designers can now work where the product actually runs. Write a component, open it in the browser, and see if it holds up. If so, a developer picks it up and adapts it to the existing architecture. No Figma-to-React translation, no weeks of back-and-forth over details lost in the handoff. The shared language that teams spent years trying to build through docs and processes now emerges naturally when everyone works in the same environment.

It doesn’t mean the friction is totally gone. A designer’s first component in code won’t have the structure a developer would give it. A developer’s first layout decision won’t have the visual rhythm and balance that a designer would build. The shared environment creates its own friction — different from the old handoff, but real. This is where professionals are still needed and irreplaceable, if you want your product to be precise, consistent, and ready to compete.

(Where this leads)

The situation is more complex than any popular narrative suggests.

Narrative A:
“AI will replace everyone.”

That’s unlikely in the next few years at least.

Companies are networks of relationships, context, and tacit knowledge. Replace a designer with AI, and six months later, you realize you lost the person who knew your product better than anyone. That kind of knowledge doesn’t transfer through a prompt.

Narrative B:
“AI will not change anything.”

These changes are already here. Tasks that once took a team and a week now take one person and a day. That’s a real shift.

(What we actually think)

We think the next few years will be chaotic. Companies will cut people for short-term savings, see quality drop, and hire again, often at higher cost.

Startups will launch with two or three people + AI. Many of them will fail, and the reason won’t be the AI’s intelligence. Unoptimized workflows and a lack of attention to detail will eat them from the inside. Teams that survive will show that artificial intelligence, combined with a strong specialist, can produce outsized results.

On design specifically, the key question is how quickly AI will learn to operate within a large organizational context without hallucinating. If that happens in the next five years, then even systems thinking could become automated.

If it doesn’t, then the approach we have built at Monoface™ is exactly right for the decade ahead.

For easy project initiation,
start with solutions:

UX Simulation 5–10 days – $5,000
UI Room 3–5 weeks – $15,000
UI Prototype 2–3 weeks – $10,000
  • Hire an autonomous design team with strong quality assurance, and avoid excessive stress and control.
  • Get started smoothly with the core product area from scratch.
  • Receive the first results to discuss after week 1.
  • Get validation-ready artifacts in 1–5 weeks for a single option, and 6–8 weeks for a full cycle (if building from scratch).
  • Share with the team, stakeholders, or investors to discuss and test your product early.
  • Components and prototypes are built in HTML/CSS, on the product’s actual framework. Everything is structured for direct adaptation.
  • A single source of truth for design decisions. When the UX Simulation, UI Room, or UI Prototype is built, every subsequent screen, page, or prototype draws from the same system.

For complex goals and flexibility,
continue with ongoing engagement:

Weekly retainer $4,000 / week
Maintenance $2,000 / week

*Switch and pause anytime

  • Get more control and flexibility over the process for complex goals.
  • Get in-house-like dedication without the hiring and management overhead — the team is ready to start in just 1 week.
  • Switch between Weekly and Maintenance options to manage team effort and budget.
  • Pause and renew the work anytime.

When results can’t wait,
get the most out of the Monoface™ team:

Xtra $8,000 / week

*Switch and pause anytime

  • Get the full team (4 designers) involved at their maximum capacity.
  • No compromises: keep the same Weekly Retainer speed with 2–4× the outcome and no quality loss.
  • This is the maximum configuration Monoface™ offers to clients at present.

    If you have custom requests, reach out to us at hello@monoface.design. We are always open to discussions.

What’s next:

If you have a product to discuss, book an introductory call with us, or drop a line via email — we are always open to discussions.

See you soon in your inbox!

With respect to what you build,
Monoface™ Team

\ (^–^) /

Here you can find documents governing the use of monoface.design and our services.

(Who we are)

Monoface Inc (“we”, “us”) is a US-registered company. We design interfaces for startups, operate the monoface.design website, and provide design services to clients worldwide.

For anything related to this policy, reach us at hello@monoface.design.

(What this policy covers)

This policy explains what personal data we collect, why we collect it, and what we do with it. It applies to everyone who visits our website or works with us, regardless of location.

(What we collect)

When you fill out the contact form on our website, we collect:

  1. Name
  2. Email address
  3. Company name (optional)
  4. Website URL (optional)
  5. Any information you include in the message field

When you visit the website, we automatically collect:

  1. IP address
  2. Browser type and version
  3. Pages visited, time spent, referring URL
  4. Device identifiers

We use cookies and similar technologies for analytics. Details are in the Cookies section below.

(Why we collect it)

We use your data for the following purposes:

  1. To respond to your inquiry and provide information about our services.
  2. To communicate during and after a project engagement.
  3. To maintain and improve the website.
  4. To analyze how the website is used.
  5. To comply with legal obligations.

We don’t sell your personal data. We don’t use it for advertising.

(Legal basis for processing)

If you’re in the EU or UK, we process your data under the following legal bases:

  1. Consent: when you submit a contact form or subscribe to communications.
  2. Contractual necessity: when processing is required to deliver services you’ve engaged us for.
  3. Legitimate interests: when we analyze website usage or improve our services, provided this doesn’t override your rights.
  4. Legal obligation: when we’re required to retain data by law.

(Cookies)

We use cookies to understand how visitors use the website. This includes analytics tools that collect anonymized usage data.

You can control cookies through your browser settings. Disabling cookies won’t prevent you from using the website, but some functionality may be limited.

We don’t use cookies for advertising or tracking across other websites.

(Third-party services)

We use third-party tools to host the website, process analytics, and manage communications. These providers access your data only to perform tasks on our behalf and are bound by confidentiality obligations.

We don’t share your data with third parties for their own marketing purposes.

(Data retention)

We keep your data only as long as there’s a reason to:

  1. Contact form submissions: for the duration of any business relationship and up to 2 years after last contact.
  2. Analytics data: up to 14 months.
  3. Contractual records: up to 6 years after the engagement ends, as required by applicable law.

After these periods, data is deleted or anonymized.

(International transfers)

Our servers and services may be located outside your country. If your data is transferred outside the EU or UK, we ensure appropriate safeguards are in place, including Standard Contractual Clauses where applicable.

(Your rights)

Depending on where you are, you may have the right to:

  1. Access: request a copy of the data we hold about you.
  2. Correction: request that we fix inaccurate or incomplete data.
  3. Deletion: request that we delete your data, subject to legal requirements.
  4. Restriction: request that we limit how we process your data.
  5. Portability: receive your data in a structured, machine-readable format.
  6. Objection: object to processing based on legitimate interests.
  7. Withdraw consent: where processing is based on consent, withdraw it at any time.

If you’re in the EU, you can lodge a complaint with your local supervisory authority. In the UK, this is the Information Commissioner’s Office (ico.org.uk).

If you’re in California or another US state with privacy legislation (CCPA/CPRA, VCDPA, CPA, CTDPA), you may have additional rights, including the right to know what data we’ve collected and the right to request deletion. We don’t sell personal data, so opt-out rights related to data sales don’t apply.

To exercise any of these rights, email us at hello@monoface.design. We’ll respond within 30 days.

(Children)

Our website and services aren’t directed at anyone under 16. We don’t knowingly collect data from children.

(Security)

We take reasonable measures to protect your data, including encryption in transit and access controls. No system is perfectly secure, but we treat your data with the same care we’d want for our own.

(Links to other sites)

Our website may link to third-party sites. We’re not responsible for their content or privacy practices.

(Changes to this policy)

We may update this policy as our practices or applicable laws change. Updates will be posted on this page with a revised date. If changes are significant, we’ll make reasonable efforts to notify you.

(Contact)

Monoface Inc
hello@monoface.design