Security and data handling
We do not turn AI loose on your customers and hope for the best.
Plain-English answers about how we handle your data, train our automations, test before launch, and keep watching afterward.
The fear is reasonable. AI gets tones wrong, makes things up, quotes prices it should not quote, and leaks information it should not see. The first ninety percent works. It is the last ten percent that will embarrass you. Our job is to take that ten percent off the table before anything goes live, and to keep watching after it does.
Where your data lives
AI safety and control.
Client information generally does not leave the systems where it already lives. Whenever possible, we connect AI tools to your existing systems rather than copying your data into separate platforms. Your customer records stay in your CRM. Your job records stay in your dispatch system. The AI reads what it needs through approved connections.
When a workflow does need a third-party AI platform, we use enterprise-grade providers with contractual data protections. Your data is not used to train public AI models. We use role-based access and restricted data scopes so an AI assistant only sees the information it needs to do its specific job.
What this means in practice
- Your data stays in your systems whenever the workflow allows it.
- Third-party AI platforms are used only with contractual data protections in place.
- Your data is not used to train public AI models.
- Role-based access. The AI sees only what it needs.
- You decide what the AI is allowed to access. Not us. Not the vendor.
- NDAs available on request, signed before any review or audit work begins.
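The role-based access described above can be pictured as a deny-by-default scope check. This is an illustrative sketch, not our production code; the assistant names and scope strings are made up for the example:

```python
# Illustrative sketch of restricted data scopes.
# Assistant names and scope strings are hypothetical examples.
ASSISTANT_SCOPES = {
    "booking_agent": {"calendar.read", "services.read", "contact.write"},
    "invoice_bot": {"invoices.read"},
}

def can_access(assistant: str, scope: str) -> bool:
    """Deny by default: an assistant sees only the scopes it was granted."""
    return scope in ASSISTANT_SCOPES.get(assistant, set())

print(can_access("booking_agent", "calendar.read"))  # True
print(can_access("booking_agent", "invoices.read"))  # False
```

The point of the sketch is the default: an assistant with no grant, or an unknown assistant, gets nothing.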
Approved knowledge
What your booking agent or chatbot is allowed to know.
A booking agent is only as trustworthy as the information it has been trained on. We build agents that answer from a knowledge base you approve before launch. That base can include your website pages, service manuals, policies, pricing rules, and FAQ documents. Nothing else. The agent is restricted to what you have authorized.
When the agent does not know an answer, it does not guess. It tells the customer it is not sure, collects the customer's contact information, and hands the conversation to a person on your team. It cannot recommend competitors, invent prices, or promise warranties and availability that do not exist.
Sources we can train on
- Website pages
- Service and product manuals
- Internal SOPs you choose to include
- Pricing sheets and policy documents
- FAQ libraries
- Anything else you approve in writing
Hard rules every agent follows
- If it does not know, it says so and collects contact info.
- It does not recommend competitors.
- It does not invent prices, warranties, policies, or availability.
- It hands off to a real person on the topics you decide.
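The "if it does not know, it says so" rule above can be sketched as a confidence gate over the approved knowledge base. This is a minimal illustration under assumed names and thresholds, not the actual agent logic:

```python
# Illustrative sketch of the "don't guess" rule.
# The threshold, fallback wording, and data shapes are hypothetical.
FALLBACK = ("I'm not sure about that one. Can I take your name and number "
            "so someone on the team can call you back?")

def answer(question, matches, min_confidence=0.75):
    """Answer only from approved knowledge; otherwise collect contact info.

    `matches` is a list of (approved_answer, confidence) pairs retrieved
    from the knowledge base for this question.
    """
    best = max(matches, key=lambda m: m[1], default=(None, 0.0))
    if best[1] < min_confidence:
        return {"reply": FALLBACK, "escalate": True}
    return {"reply": best[0], "escalate": False}
```

For example, a confident match returns the approved answer with `escalate` set to `False`, while a weak match, or no match at all, returns the fallback and flags the conversation for a human.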
Testing
Testing before launch.
A working AI is not the same as a launched AI. Before any automation goes live, we run it through a structured testing phase that simulates the messy reality of how customers actually behave. You get to test it before your customers do, and you sign off before launch.
1. Common customer questions
2. Edge cases and unusual requests
3. Pricing questions, including the ones where the answer should be "I will have someone call you"
4. Scheduling scenarios across business hours, weekends, and holidays
5. After-hours behavior, including the handoff path
6. Angry, confused, or vague customer messages
7. Handoff triggers: when the AI should stop and bring in a person
8. Failure modes: what happens when the integration is down or the data is wrong
Testing typically runs one to three weeks depending on scope. You get a written testing report and approve go-live in writing. Final approval is yours, not ours.
Oversight
Human oversight, always.
We do not build automations that run forever without human review. Every system has a person on your team who owns it. Every system has clear rules for when it should escalate. Every system has an off switch.
You define the topics the AI is not allowed to answer. You define the topics it must escalate. We monitor early conversations after launch, usually for the first thirty days, and tune the system based on real customer interactions. If the AI gives a bad answer, we can review the log, identify the cause, and adjust within hours.
If something goes wrong
We can pause or shut down the system in minutes. You will not be waiting on a vendor support ticket while a customer is on the line.
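The off switch described here is, at its simplest, a pause flag the system checks before every reply. The sketch below is illustrative only; `process_with_ai` is a hypothetical stand-in for the real pipeline:

```python
# Illustrative kill-switch sketch. Real deployments vary; this only shows
# the principle: a flag the owner can flip, checked before every reply.
import threading

def process_with_ai(message):
    # Hypothetical stand-in for the real AI pipeline.
    return f"AI reply to: {message}"

class KillSwitch:
    """A pause flag the system owner can flip without a vendor ticket."""

    def __init__(self):
        self._paused = threading.Event()

    def pause(self):
        self._paused.set()

    def resume(self):
        self._paused.clear()

    def handle(self, message):
        if self._paused.is_set():
            # While paused, customers get a holding reply, not the AI.
            return "Thanks for reaching out. A team member will reply shortly."
        return process_with_ai(message)
```

Because the check happens on every message, flipping the flag takes effect immediately, which is what makes a minutes-not-days shutdown possible.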
If you want to expand
The same monitoring process becomes the foundation for adding new capabilities. You will know what works before you scale it.
Ongoing service
What ongoing service includes.
Every BitDepth automation includes an ongoing service package. AI systems drift. Your business changes. Knowledge bases go stale. The work after launch is the difference between an automation that compounds value and one that quietly degrades into a liability.
Included every month
- Conversation review and quality monitoring
- Error monitoring and alerting
- Prompt updates as your business evolves
- Knowledge base updates (new services, new prices, new policies)
- Performance reporting
- A scheduled performance check-in
- Reasonable support requests
Pricing depends on scope and is included in your engagement quote. Ongoing service is not optional for automations that interact with your customers. We do not ship and disappear.
Vendor neutrality
We do not sell software. We are not a reseller. We do not get paid by any platform vendor for recommending their tools. Our recommendations are based on what fits your business, your existing systems, and your budget.
If a tool you already own is the right answer, that is the answer. If the right answer is a tool we have never used, we will tell you and help you find the right partner. The work is the recommendation, not the sale.
Have a specific question we did not answer?
Ask. Most of the questions buyers raise about safety and data we have heard before, and the answer is usually shorter and simpler than the question.