AI

Introducing Scribe Mode: The Digital Consultant that listens, thinks, and builds your application at the speed of conversation.

Monday, November 17, 2025
Feature Spotlight: Meet the Scribe

The "Digital Consultant" That Listens, Thinks, and Builds.

We are thrilled to introduce Scribe Mode, the newest persona in the App Studio. If the Tutor is your teacher and the Builder is your engineer, the Scribe is your silent partner in the room.

For years, consultants and architects have faced the same friction: the "Translation Gap." You spend an hour brilliantly brainstorming requirements with a client, but when the meeting ends, you are left with messy notes and a blank screen.

The Scribe eliminates the blank page. It acts as a Prompt Compiler, listening to your conversation, filtering out the noise, and constructing the application in the background while you talk.

How It Works: The "Clarity Gauge"

The Scribe isn't just a voice recorder; it is a Real-Time Requirements Engine.

  1. Ambient Listening: Switch to "Scribe Mode" and hit Record. The Scribe uses your browser’s native speech engine to transcribe the meeting in real-time.
  2. The Director’s Remark: Need to steer the AI? You can type "corrections" or "technical specifics" directly into the chat buffer while the recording continues. The Scribe treats your typed notes as high-priority instructions to override or clarify the spoken text.
  3. The Ephemeral Cheat Sheet: When you pause to think (or hit Stop), the Scribe analyzes the conversation against your app’s live metadata. It generates a Cheat Sheet—a proposed plan of action.
    • The Magic: If you keep talking, the Cheat Sheet vanishes and rebuilds. It is a living "Clarity Gauge." If the Cheat Sheet looks right, you know the AI (and the room) is aligned. If it looks wrong, you just keep talking to fix it.
  4. Instant Materialization: Click "Apply All," and the App Studio executes the plan. By the time your client returns from a coffee break, the features you discussed are live in the Realistic Model App (RMA).

The Scribe turns "Talk" into "Software" at the speed of conversation.
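For the technically curious: the ambient listening in step 1 builds on the speech recognition that ships with modern browsers. The sketch below is not the Scribe's actual source code; it is a minimal illustration of continuous transcription with the standard Web Speech API, and the `onTranscript` callback is a name invented for the example.

```typescript
// Minimal sketch of continuous in-browser transcription with the Web Speech API.
// Names like `onTranscript` are illustrative; the Scribe's internals are not published.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

function startListening(onTranscript: (text: string, isFinal: boolean) => void) {
  const recognition = new SpeechRecognitionImpl();
  recognition.continuous = true;      // keep listening across pauses
  recognition.interimResults = true;  // surface partial phrases as they form

  recognition.onresult = (event: any) => {
    for (let i = event.resultIndex; i < event.results.length; i++) {
      const result = event.results[i];
      onTranscript(result[0].transcript, result.isFinal);
    }
  };

  recognition.start();
  return () => recognition.stop();    // caller invokes this on "Stop"
}

// Usage: accumulate final phrases into a transcript buffer for later synthesis.
const stop = startListening((text, isFinal) => {
  if (isFinal) console.log("Captured:", text);
});
```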

For Our Consultants and Partners: Your New Superpower

If you build apps for clients, Scribe Mode is your new competitive advantage. It transforms you from a note-taker into an Architect who delivers results in the room.

  • Win the Room: Don't take notes; take action. Use Scribe Mode during the discovery meeting to generate a working prototype in real-time. Show your client the software they asked for before the meeting ends.
  • Instant Proposals: Use the Builder to generate a technical SRS (Software Requirements Specification) and LOE (Level of Effort) estimation based on the meeting transcript. Turn a 30-minute chat into a professional proposal instantly.
  • The "Wizard" Effect: You remain the expert. The Scribe handles the typing, configuration, and schema design, freeing you to focus on strategy and client relationships.

Choose Your Partner: Tutor vs. Builder vs. Scribe

| Feature | The Tutor | The Builder | The Scribe |
| --- | --- | --- | --- |
| Role | The Mentor (Teacher) | The Engineer (Maker) | The Silent Partner (Listener) |
| Best For... | Learning "how-to," navigating the studio, and troubleshooting errors. | Executing complex tasks, generating schemas, and building specific features instantly. | Stakeholder meetings, "rubber duck" brainstorming, and capturing requirements in real time. |
| Interaction | Conversational: ask questions like "How do I filter a grid?" | Directive: give commands like "Create a dashboard for Sales." | Ambient: runs in the background; listens to voice (mic) or accepts unstructured notes. |
| Input Type | Natural language questions. | Precise instructions and prompts. | Stream of consciousness (voice or text) + "Director's Remarks." |
| Output | Explanations + navigation pointers (guides you to the screen). | Cheat Sheet with executable steps + "Apply All" button. | Ephemeral Cheat Sheet (self-correcting plan) + "Apply All" button. |
| Context | Knows the documentation (Service Manual) and your current screen location. | Knows your entire project structure (Schema, Controllers, Pages) to generate valid code. | Knows your project structure + synthesizes the entire conversation history into a final plan. |
| Cost | Free (included in all editions). | Paid (consumes Builder Credits). | Paid (consumes Builder Credits for synthesis). |

Summary: Which Mode Do I Need?

  • Use Tutor when you want to do it yourself or do not want to consume credits, but need a map.
  • Use Builder when you know what you want and want the AI to do it for you. Requires Builder Credits.
  • Use Scribe when you are figuring it out with a client or team and want the app to materialize as you speak. Requires Builder Credits.
Friday, November 14, 2025
The "Mercury" Incident: Why the Global AI Brain is Dangerous

Imagine the scene. A top Sales Director at a major industrial firm opens their AI assistant. They are looking for a critical status update on a potential new client, "Mercury Logistics."

The Director types: "What is the current status of the Mercury Lead?"

The AI pauses for a moment, its "thinking" animation spinning, and then replies with supreme confidence:

"The Mercury Lead is currently unstable and highly toxic. Safety protocols indicate a high risk of contamination during the negotiation phase. Recommend immediate containment protocols."

The Sales Director stares at the screen in horror. Did the AI just tell them to treat a high-value client like a biohazard?

What happened?

The AI didn't break. It did exactly what it was designed to do. It acted as a "Global Brain," searching the company's entire centralized Data Lake for the keywords "Mercury" and "Lead."

The problem was that the company also has a Manufacturing Division that uses the chemical elements Mercury (Hg) and Lead (Pb) in production testing. The AI, lacking context, conflated a "Sales Lead" with a "Heavy Metal," resulting in a catastrophic hallucination.

This is the "Mercury" Incident—a perfect example of why the industry's obsession with monolithic, all-knowing AI systems is a dangerous dead end for the enterprise.

The Problem with the "Genius" Model (The Global Ontology)

The current trend in enterprise AI is to build a "Genius." The promise is seductive: "Dump all your data—from Salesforce, SAP, Jira, and SharePoint—into one massive Vector Database or Data Lake. The AI will figure it out."

This creates a Global Ontology—a unified, but deeply confused, view of the world.

The Failure Mode: Semantic Ambiguity

The root cause of the "Mercury" Incident is Semantic Ambiguity. In a global context, words lose their meaning.

  • In Sales, "Lead" means a potential customer.
  • In Manufacturing, "Lead" means a toxic metal.
  • In HR, "Lead" means a team manager.

When you force an AI to reason over all of these simultaneously, you are inviting disaster. The AI has to guess which definition applies based on subtle clues in your prompt. If it guesses wrong, it hallucinates.

The Hidden Cost: Token Bloat

To fix this, developers have to engage in "Prompt Engineering," feeding the model thousands of words of instructions: "You are a Sales Assistant. When I say 'Lead', I mean a customer, NOT a metal. Ignore data from the Manufacturing database..."

This is expensive. Every time you send that massive instruction block, you are paying for thousands of tokens, slowing down the response, and praying the model doesn't get confused anyway.

The Solution: The "Employee" Model (The Micro-Ontology)

There is a better way. It’s boring, it’s safe, and it mimics how human organizations actually work.

When you walk into a Hospital, you don't ask the receptionist for a pizza quote. You know by the context of the building that you are there for medical issues.

Code On Time applies this same logic to AI through the concept of the Digital Co-Worker and the Micro-Ontology.

Standing in the Right Room

Instead of a single "Global Brain," Code On Time builds a Society of Apps.

  • You have a CRM App.
  • You have a Manufacturing App.
  • You have an HR App.

Each app defines its own universe through a Micro-Ontology, delivered automatically via its HATEOAS API.

Crucially, this isn't a cryptic technical schema. The API entry point faithfully reproduces the Navigation Menu of the visible UI, complete with the same human-friendly labels and tooltips. This places the Co-Worker on the exact same footing as the human user.

Because the AI reads the exact same map as the human, it doesn't need to be "trained" on how to use the app. It just looks at the menu and follows "Sales Leads" because the tooltip says "Manage potential customers."
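To make this concrete, here is a hypothetical sketch of what an entry point that mirrors a CRM navigation menu could look like. The field names and URLs are invented for illustration; they are not the framework's published schema.

```typescript
// Hypothetical shape of a HATEOAS entry point that mirrors the app's navigation menu.
// Field names and URLs are illustrative only; the real API schema may differ.
interface NavLink {
  label: string;    // same text the human sees in the menu
  tooltip: string;  // same tooltip the human sees
  href: string;     // where the Co-Worker goes next
}

const crmEntryPoint: { links: NavLink[] } = {
  links: [
    { label: "Sales Leads",   tooltip: "Manage potential customers",       href: "/v2/leads" },
    { label: "Accounts",      tooltip: "Companies you do business with",   href: "/v2/accounts" },
    { label: "Opportunities", tooltip: "Deals in the pipeline",            href: "/v2/opportunities" },
  ],
};

// The Co-Worker picks a link the same way a person would: by reading its label and tooltip.
const leads = crmEntryPoint.links.find((l) => l.tooltip.includes("potential customers"));
```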

The Mercury Incident: Solved

Let's replay the scenario with a Code On Time Digital Co-Worker.

Scenario A: The User is in the CRM App. The user logs into the CRM. The Digital Co-Worker inherits their context. The "Manufacturing" database literally does not exist in this world.

The Prompt: “What is the current status of the Mercury Lead?”

The Action: The Co-Worker queries the only "Lead" table it can see—the Sales Leads table. There is zero ambiguity.

The Outcome:

"The Lead 'Mercury Logistics' is in the 'Proposal' stage. The closing probability is 60%."

Scenario B: The User is in the Manufacturing App. The user logs into the production floor system.

The Prompt: “What is the current status of the Mercury Lead?”

The Action: The Co-Worker queries the Safety Data Sheets.

The Outcome:

"Warning: Detected 'Lead' and 'Mercury' contamination in Lot #404. Status: Quarantine."

By restricting the context to the domain of the application, the hallucination becomes mathematically impossible. The Co-Worker cannot conflate data it cannot see.

The Best of Both Worlds: Federated Scalability

But what if you need data from both systems?

This is where Federated Identity Management (FIM) comes in. It acts as the trusted hallway between your apps.

If the Sales Director intentionally needs to know if "Mercury Logistics" has any outstanding safety violations that might block the deal, they can explicitly ask the Co-Worker to check.

The Co-Worker, using its FIM passport, "walks down the hall" to the Manufacturing App. It enters that new Micro-Ontology, performs the search in that context, and reports back.

This turns "Accidental Contamination" into "Intentional Discovery." It keeps the boundaries clear while still allowing for cross-domain intelligence.

The Verdict: Boring is Safe

The promise of a "Genius AI" that knows everything is a marketing fantasy that leads to expensive, fragile, and dangerous systems.

Enterprises don't need an AI that knows everything. They need an AI that knows where it is.

  • Global Brain: High Cost, High Risk, Unpredictable.
  • Digital Co-Worker: Low Cost, Zero Risk, Deterministic.

By embracing the "boring" architecture of isolated Micro-Ontologies, you don't just save money on tokens. You save yourself from the nightmare of explaining to a client why your AI called them toxic.

Labels: AI, Micro Ontology
Wednesday, November 12, 2025
The Fractal Workflow: How AI Builds and Runs Your App

We have talked about the AI Builder for developers and the Digital Co-Worker for end-users. At first glance, these might seem like two different tools (one for coding, one for business).

But they are actually the same engine, running on the same logic.

At Code On Time, we have built a Fractal Architecture that repeats itself at design time and runtime. The way you build the app is exactly the way your users will use it.

The Developer's Loop (Design Time)

When you sit down with the AI Builder in App Studio, the workflow is clear:

  1. Prompt: You state a goal ("Create a Sales Dashboard").
  2. Plan: The Builder analyzes the metadata and presents a Cheat Sheet (a step-by-step plan of action).
  3. Approval: You review the plan. You are in the director's seat. You click "Apply All".
  4. Execution: The Builder operates the App Explorer (the "Invisible UI" of the Studio) to create pages, controllers, and views.
  5. Result: A new feature exists.

You are never locked into the AI. At any moment, you can take the wheel and work directly with the App Explorer. Because the AI Builder uses the exact same tools and follows the exact same tutorials as a human developer, its work is transparent and editable. You can use the Tutor to learn the ropes or the Builder to speed up the heavy lifting, but the manual controls are always at your fingertips.

The User's Loop (Runtime)

When your user sits down with their Digital Co-Worker, the workflow is identical:

  1. Prompt: They state a goal ("Approve all pending orders").
  2. Plan: The Co-Worker analyzes the HATEOAS API (the metadata) and formulates a sequence of actions.
  3. Approval: The Co-Worker presents an Interactive Link or summary. "I found 5 orders. Please review and approve."
  4. Execution: The user clicks "Approve," or the Co-Worker executes the API calls directly if permitted.
  5. Result: The business process advances.

End-users have the same flexibility. They can interact with the standard rich user interface, or you can build a custom front-end powered by the HATEOAS API. The Co-Worker prompt is available everywhere: docked inside the app for context, or switched to fullscreen mode for a pure, chat-first experience. You can even configure the app to be 'Headless,' where users interact exclusively via the prompt, or remotely via Email and SMS using secure Device Authorization.
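For developers wiring up a custom front-end, the loop above can be pictured roughly like the sketch below. It assumes a hypothetical entry point and link relations (`pending-orders`, `approve-all`); the real link names are defined by your own Data Controllers.

```typescript
// Illustrative only: a Co-Worker-style loop that discovers an action via hypermedia
// and waits for human approval before executing it. URLs and fields are hypothetical.
interface ApiLink { rel: string; href: string; method: string; }
interface Resource { links: ApiLink[]; items?: unknown[]; }

async function approvePendingOrders(baseUrl: string, confirm: (count: number) => boolean) {
  // Plan: read the entry point and follow the link whose relation matches the goal.
  const root: Resource = await (await fetch(`${baseUrl}/`)).json();
  const pending = root.links.find((l) => l.rel === "pending-orders");
  if (!pending) return;

  const orders: Resource = await (await fetch(pending.href)).json();

  // Approval: the human stays in the director's seat.
  if (!confirm(orders.items?.length ?? 0)) return;

  // Execution: only links that the API actually advertises can be invoked.
  const approve = orders.links.find((l) => l.rel === "approve-all");
  if (approve) await fetch(approve.href, { method: approve.method });
}
```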

The Field Worker's Loop (Connection-Independent)

The fractal pattern extends to the very edge of the network. When your Field Workers operate in isolation, they aren't just viewing static pages; they are interacting with a complete, local instance of the application logic.

The Setup (Offline Sync): Before the loop begins, the Offline Sync component performs the heavy lifting. Upon login, it analyzes the pages marked as "Offline" and downloads their dependencies. It fetches the JSON Metadata (the compiled definitions of your Controllers) and the Data Rows (Suppliers, Products, Categories), storing them in the device's IndexedDB.

The Runtime Loop:

  1. Prompt: The user taps "New Supplier".
  2. Plan: The Touch UI framework detects the offline context. Instead of calling the server, it activates the Offline Data Processor (ODP). The ODP consults the Local Metadata (the cached JSON controller definition) to understand the form structure.
  3. Approval: The ODP generates the UI instantly. It alters the standard behavior to fit the local context: unlike online forms, which require a server round-trip to establish IDs, the ODP renders the createForm1 view with the Products DataView immediately visible.
  4. Execution: The user enters the supplier name and adds five products to the child grid. The ODP simulates these operations in memory, enforcing integrity by validating the entire "Master + 5 Details" graph as a single unit before allowing the save.
  5. Result: The ODP bundles the master record and child items into a single Transaction Log sequence. It then updates the local state registry (OfflineSync.json) and persists the new data files to IndexedDB. This "checkpoint" ensures that even if the device loses power, the pending work is safe until the user taps Synchronize.

This proves that "Offline" is not just a storage feature; it is a full-fidelity Transactional Workflow powered by the exact same metadata that drives your AI and your Web UI.
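To give a feel for what the "checkpoint" in step 5 might involve, here is a minimal sketch of persisting a master-plus-details transaction to IndexedDB. The database, store, and field names are invented for the example and do not reflect the framework's actual storage layout.

```typescript
// Illustrative sketch of an offline "checkpoint": the master record and its child
// rows are queued as one unit in IndexedDB. Store and field names are hypothetical.
interface PendingTransaction {
  controller: string;
  master: Record<string, unknown>;
  details: Record<string, unknown>[];
  createdAt: number;
}

function saveCheckpoint(tx: PendingTransaction): Promise<void> {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open("offline-app", 1);
    open.onupgradeneeded = () => {
      open.result.createObjectStore("transactionLog", { autoIncrement: true });
    };
    open.onsuccess = () => {
      const store = open.result
        .transaction("transactionLog", "readwrite")
        .objectStore("transactionLog");
      const request = store.add(tx); // the whole master + details graph is one log entry
      request.onsuccess = () => resolve();
      request.onerror = () => reject(request.error);
    };
    open.onerror = () => reject(open.error);
  });
}

// Usage: queue "New Supplier" with its products until the user taps Synchronize.
saveCheckpoint({
  controller: "Suppliers",
  master: { CompanyName: "Acme Coffee" },
  details: [{ ProductName: "Espresso Blend" }],
  createdAt: Date.now(),
});
```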

The Creator's Loop (Runtime Build)

The fractal pattern goes one step deeper. In the Digital Workforce, the line between "User" and "Developer" blurs.

With Dynamic Data Collection, your business users can define new data structures (Surveys, Audits, Inspections) directly inside the running application, using the same logic you used to build it.

  1. Prompt: The user tells the Co-Worker: "Create a daily fire safety checklist for the warehouse."
  2. Plan: The Co-Worker (acting as a runtime Builder) generates the JSON definition for the survey, effectively "coding" a new form on the fly.
  3. Approval: The user reviews the structure in the Runtime App Explorer - a simplified version of the tool you use in App Studio.
  4. Execution: The definition is saved to the database (not code), instantly deploying the new form to thousands of offline users.
  5. Result: A new business process is materialized without a software deployment.

This proves that the Axiom Engine isn't just a developer tool; it is a ubiquitous creation engine available to everyone in your organization.
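As an illustration of step 2, a runtime-generated definition might look roughly like the following. The property names are hypothetical stand-ins for whatever shape Dynamic Data Collection actually stores in the database.

```typescript
// Hypothetical JSON definition a runtime Builder could emit for a new checklist.
// Property names are illustrative; the actual survey schema is defined by the product.
const fireSafetyChecklist = {
  name: "DailyFireSafetyChecklist",
  label: "Daily Fire Safety Checklist",
  scope: "Warehouse",
  fields: [
    { name: "InspectionDate",       type: "date",    required: true },
    { name: "ExtinguishersCharged", type: "boolean", required: true },
    { name: "ExitsUnobstructed",    type: "boolean", required: true },
    { name: "Notes",                type: "text",    required: false },
  ],
};

// Saved to the database (not compiled into code), so offline users receive it on next sync.
console.log(JSON.stringify(fireSafetyChecklist, null, 2));
```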

Powered by the Axiom Engine

This symmetry is not an accident. It is the Axiom Engine in action.

  • For the Developer: The Axiom Engine navigates the App Structure (Controllers, Pages) to build the software.
  • For the User: The Axiom Engine navigates the App Data (Orders, Customers) to run the business.

By learning to build with the AI, you are simultaneously learning how to deploy it. You aren't just coding; you are training the workforce of the future using the exact same patterns you use to do your job.

You Are the Director

In this fractal architecture, the role of the human (whether developer or end-user) shifts from "Operator" to "Director."

You are not being replaced; you are being promoted. The AI cannot do anything that isn't defined in the platform's "physics."

  • On the Build Side: The App Explorer is the boundary. The AI Builder cannot invent features that don't exist in the App Studio. It can only manipulate the explorer nodes that you can manipulate yourself.
  • On the Run Side: The HATEOAS API is the boundary. The AI Co-Worker cannot invent business actions that aren't defined in your Data Controllers. It can only click the links that you have authorized.
    • However, within that boundary, you have 100% Data Utility. Because the Agent sees exactly what you see, it can answer specific questions like "What is Rob's number?" immediately, provided you have permission to view that data.

The AI provides the labor, but you provide the intent. You direct the show, confident that the actors can only perform the script you wrote.

Labels: AI