AI and Automated Insights Policy

This Policy explains how ZoikoTime uses artificial intelligence, machine learning, rules-based automation, statistical analysis, anomaly detection, and automated workforce intelligence features — and the governance obligations that apply.

ZoikoTime is designed as workforce assurance and performance intelligence infrastructure, not a surveillance-first product. AI and automated insights must be configured and used lawfully, proportionately, explainably, and under meaningful human oversight.

1. Purpose and Legal Status

This AI and Automated Insights Policy applies to all artificial intelligence, machine learning, rules-based automation, statistical analysis, anomaly detection, and automated workforce intelligence features within the ZoikoTime platform.

The Policy is designed to support responsible deployment, worker transparency, human oversight, legal defensibility, and enterprise procurement review. It does not replace the Terms of Service, Data Processing Addendum, Privacy Notice, Worker Transparency Notice, or any signed Order Form.

3. Core AI Governance Principles

The following principles apply to all AI-enabled and automated insight features within ZoikoTime:

5. What ZoikoTime AI Does Not Do

Unless separately documented in writing, contracted for, and lawfully configured, ZoikoTime does not provide:

8. Prohibited and Restricted Uses

Customers must not use ZoikoTime AI or automated insights for any prohibited, unlawful, deceptive, discriminatory, or disproportionately intrusive purpose, including:

9. Human Oversight and Decision Protocol

ZoikoTime AI and automated insights must be used under a human oversight model. Human review must be meaningful, documented, and capable of changing the outcome.

A reviewer must consider the underlying evidence, contextual factors, worker response, applicable policies, and any dispute or correction submitted by the Worker. Human review must not be a rubber-stamp exercise.

Where a Customer uses ZoikoTime outputs as part of a significant workforce decision, the Customer must maintain records showing the human reviewer, evidence considered, worker response or opportunity to respond, applicable policy, decision rationale, and any appeal or correction outcome.
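For illustration only, the record elements listed above could be captured in a structure like the following. The field names and types are assumptions made for this sketch, not a ZoikoTime schema or API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Illustrative audit record for a significant workforce decision.

    All field names are hypothetical; they mirror the elements the
    Policy requires the Customer to retain, not an actual product schema.
    """
    reviewer: str                    # the human reviewer accountable for the outcome
    evidence_considered: list[str]   # references to the underlying evidence
    worker_response: str             # worker's response, or the opportunity offered
    applicable_policy: str           # the policy relied on for the decision
    decision_rationale: str          # why the reviewer reached this outcome
    appeal_outcome: str = ""         # appeal or correction result, if any
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example record for a flag that was withdrawn after human review.
record = DecisionRecord(
    reviewer="j.doe",
    evidence_considered=["timesheet-2024-18", "anomaly-flag-0042"],
    worker_response="Disputed: offline client visit on 3 May",
    applicable_policy="Attendance Policy v3",
    decision_rationale="Flag withdrawn after evidence review",
)
```

Retaining each element as a distinct field makes it straightforward to demonstrate, after the fact, that review was meaningful rather than a rubber stamp.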

12. Explainability, Confidence Scores, and Evidence

ZoikoTime AI features may provide explanations, supporting indicators, source references, confidence levels, or evidence links depending on feature availability and configuration.

Confidence scores and automated flags are not determinations of truth, misconduct, fraud, or legal non-compliance. They are signals for review.

Customers must train authorized reviewers to understand that a low-confidence, ambiguous, conflicting, or incomplete record requires additional context and must not be treated as conclusive.
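The review discipline above can be sketched as a simple triage rule. The threshold value and status labels here are illustrative assumptions, not product behavior; the point is that no combination of inputs yields a conclusive determination.

```python
def triage_flag(confidence: float,
                has_conflicting_evidence: bool,
                record_complete: bool) -> str:
    """Route an automated flag to a review outcome.

    A flag is never treated as conclusive: low confidence, conflicting
    evidence, or an incomplete record all require additional context.
    The 0.8 threshold is an illustrative assumption.
    """
    if confidence < 0.8 or has_conflicting_evidence or not record_complete:
        return "needs-additional-context"
    return "ready-for-human-review"  # still a signal, never a determination
```

Even the "ready" path terminates in human review, consistent with the principle that confidence scores are signals, not findings of misconduct.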

13. Accuracy, Limitations, and Contestability

AI and automated systems may produce incomplete, inaccurate, delayed, context-poor, or misleading outputs. Work patterns may be affected by device settings, offline work, accessibility tools, network conditions, integration errors, leave records, role differences, meetings, client work, non-keyboard work, care responsibilities, disabilities, religious practices, local labor rules, or approved exceptions.

Customers must provide a practical route for Workers to contest, explain, or correct records that may affect them. Contestability should include a clear recipient, review timeframe, evidence review, decision record, correction mechanism, and escalation route where appropriate.
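As a sketch of the contestability elements above, a dispute could be tracked as follows. The names, statuses, and the 14-day review window are assumptions for illustration, not requirements of this Policy.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Contest:
    """Illustrative dispute record; field names are hypothetical."""
    worker_id: str
    record_id: str
    recipient: str     # the clear recipient responsible for the review
    opened_on: date
    review_due: date   # review timeframe communicated to the Worker
    status: str = "open"  # open -> reviewed -> corrected or escalated

def open_contest(worker_id: str, record_id: str, recipient: str,
                 opened_on: date, review_days: int = 14) -> Contest:
    """Open a dispute with a concrete review deadline."""
    return Contest(worker_id, record_id, recipient, opened_on,
                   review_due=opened_on + timedelta(days=review_days))
```

Recording the recipient and deadline at the moment a dispute is opened gives the Worker a verifiable commitment rather than an open-ended promise of review.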

15. Data Use, Model Training, and Customer Data Protection

ZoikoTime does not claim ownership of Customer Data. Customer Data remains subject to the Agreement, Data Processing Addendum, Privacy Notice, and applicable law.

Unless expressly stated in a signed written agreement, ZoikoTime will not use identifiable Customer Data to train foundation models or general-purpose models for the benefit of other customers. ZoikoTime may use aggregated, de-identified, or statistical information to maintain, secure, operate, improve, benchmark, or develop the platform where permitted by the Agreement and applicable law, provided such information does not identify the Customer, Workers, or other individuals.
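One common way to produce statistical information that cannot identify individuals is to strip identifiers and suppress small groups. The sketch below is illustrative only; the minimum group size is an assumption in the style of a k-anonymity threshold, not a documented ZoikoTime mechanism.

```python
from collections import Counter

def aggregate_usage(events: list[dict], min_group_size: int = 5) -> dict[str, int]:
    """Produce statistical counts with identifiers stripped.

    Drops any group smaller than `min_group_size` (an illustrative
    k-anonymity-style threshold) so the output cannot be traced back
    to individual Workers.
    """
    counts = Counter(e["feature"] for e in events)  # no worker IDs retained
    return {feature: n for feature, n in counts.items() if n >= min_group_size}

events = [{"worker_id": i, "feature": "timesheet"} for i in range(6)] + \
         [{"worker_id": 99, "feature": "anomaly_flag"}]
aggregate_usage(events)  # {'timesheet': 6}; the singleton group is suppressed
```

Suppressing the one-member group matters: a count of one for a rare feature can re-identify a Worker even after direct identifiers are removed.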

17. High-Risk and Regulated Deployment Conditions

Certain AI use cases in employment, worker management, workforce monitoring, automated decision support, or significant employment decisions may be regulated as high-risk in particular jurisdictions. Customers must assess whether their deployment triggers these requirements before enabling or relying on relevant features.

For EU deployments: assess the EU AI Act, GDPR, local labor law, and works council rules. For UK deployments: assess UK GDPR, the Data Protection Act 2018, ICO employment practices guidance, and consultation rules. For US deployments: assess applicable federal, state, and city requirements, including employment, privacy, wage and hour, anti-discrimination, biometric, and automated decision tool laws.

Contact ZoikoTime

For questions about this document or your legal rights:

1401 21st Street, Suite R, Sacramento, CA 95811, USA
European HQ: 67-69 Great Portland Street, 5th Floor, London W1W 5PF, UK