AI and Automated Insights Policy
This Policy explains how ZoikoTime uses artificial intelligence, machine learning, rules-based automation, statistical analysis, anomaly detection, and automated workforce intelligence features — and the governance obligations that apply.
ZoikoTime is designed as workforce assurance and performance intelligence infrastructure, not a surveillance-first product. AI and automated insights must be configured and used lawfully and proportionately, with explainable outputs and meaningful human oversight.
1. Purpose and Legal Status
This AI and Automated Insights Policy applies to all AI-enabled and automated insight features within the ZoikoTime platform.
The Policy is designed to support responsible deployment, worker transparency, human oversight, legal defensibility, and enterprise procurement review. It does not replace the Terms of Service, Data Processing Addendum, Privacy Notice, Worker Transparency Notice, or any signed Order Form.
3. Core AI Governance Principles
The following principles apply to all AI-enabled and automated insight features within ZoikoTime:
- Human-in-command: ZoikoTime outputs are decision-support signals and must not replace legally required human judgment.
- Transparency: Workers should receive clear, timely, and accessible information about what is collected, why, how it is used, and how to challenge or correct records.
- Proportionality: Customers must configure features in a way proportionate to legitimate business, legal, operational, security, payroll, billing, or compliance purposes.
- Explainability: Automated insights should be capable of being explained through source signals, policy rules, confidence indicators, timestamps, and evidence records.
- Auditability: Material workforce records, AI-assisted classifications, overrides, disputes, and administrative access should be logged and retained appropriately.
- Privacy by design: ZoikoTime should be deployed using the least intrusive configuration suitable for the Customer's legitimate purpose.
- No secret adverse automation: Customers must not use ZoikoTime to make hidden, solely automated, adverse workforce decisions where worker notice, consent, human review, consultation, or legal safeguards are required.
5. What ZoikoTime AI Does Not Do
Unless separately documented in writing, contracted for, and lawfully configured, ZoikoTime does not provide:
- Facial recognition, biometric identification, biometric verification, or biometric categorization
- Emotion recognition, sentiment-based employee scoring, psychological profiling, or mental-health inference
- Keystroke dynamics used for biometric identification
- Hidden camera activation, audio recording, private-message interception, or password capture
- Solely automated hiring, firing, promotion, demotion, compensation, scheduling, disciplinary, or termination decisions
- Legal advice, HR advice, tax advice, or a substitute for professional judgment
- Automated determinations that a Worker committed misconduct, fraud, or a wage-hour violation without human review
8. Prohibited and Restricted Uses
Customers must not use ZoikoTime AI or automated insights for any prohibited, unlawful, deceptive, discriminatory, or disproportionately intrusive purpose, including:
- Using AI outputs as the sole basis for adverse employment decisions without human review and due process
- Using AI outputs to infer protected characteristics or sensitive attributes without lawful basis and customer governance approval
- Manipulating, deceiving, intimidating, or unlawfully pressuring workers through AI-assisted outputs
- Training external AI models using ZoikoTime outputs or data without written authorization
- Creating unlawful profiling or discriminatory outcomes by combining ZoikoTime outputs with external data
9. Human Oversight and Decision Protocol
ZoikoTime AI and automated insights must be used under a human oversight model. Human review must be meaningful, documented, and capable of changing the outcome.
A reviewer must consider the underlying evidence, contextual factors, worker response, applicable policies, and any dispute or correction submitted by the Worker. Human review must not be a rubber-stamp exercise.
Where a Customer uses ZoikoTime outputs as part of a significant workforce decision, the Customer must maintain records showing the human reviewer, evidence considered, worker response or opportunity to respond, applicable policy, decision rationale, and any appeal or correction outcome.
12. Explainability, Confidence Scores, and Evidence
ZoikoTime AI features may provide explanations, supporting indicators, source references, confidence levels, or evidence links depending on feature availability and configuration.
Confidence scores and automated flags are not determinations of truth, misconduct, fraud, or legal non-compliance. They are signals for review.
Customers must train authorized reviewers to understand that a low-confidence, ambiguous, conflicting, or incomplete record requires additional context and must not be treated as conclusive.
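To illustrate the review obligation above, a Customer's internal tooling might route flags so that low-confidence or conflicting signals always gather additional context first; the thresholds and queue names below are hypothetical, and every path still ends in human review:

```python
def route_flag(confidence: float, has_conflicting_evidence: bool) -> str:
    """Route an automated flag to a review queue (illustrative thresholds only).

    No path auto-resolves a flag as misconduct: both queues require a
    human reviewer, and weak or conflicting signals need extra context first.
    """
    if confidence < 0.6 or has_conflicting_evidence:
        return "needs-additional-context"  # gather context before human review
    return "standard-human-review"         # still requires a human decision
```

The key design point mirrored from the Policy is that confidence alone never produces an adverse outcome; it only changes how much context a human reviewer must gather.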
13. Accuracy, Limitations, and Contestability
AI and automated systems may produce incomplete, inaccurate, delayed, context-poor, or misleading outputs. Work patterns may be affected by device settings, offline work, accessibility tools, network conditions, integration errors, leave records, role differences, meetings, client work, non-keyboard work, care responsibilities, disabilities, religious practices, local labor rules, or approved exceptions.
Customers must provide a practical route for Workers to contest, explain, or correct records that may affect them. Contestability should include a clear recipient, review timeframe, evidence review, decision record, correction mechanism, and escalation route where appropriate.
15. Data Use, Model Training, and Customer Data Protection
ZoikoTime does not claim ownership of Customer Data. Customer Data remains subject to the Agreement, Data Processing Addendum, Privacy Notice, and applicable law.
Unless expressly stated in a signed written agreement, ZoikoTime will not use identifiable Customer Data to train foundation models or general models for the benefit of other customers. ZoikoTime may use aggregated, de-identified, or statistical information to maintain, secure, operate, improve, benchmark, or develop the platform where permitted by the Agreement and applicable law, provided such information does not identify Customer, Workers, or individuals.
17. High-Risk and Regulated Deployment Conditions
Certain AI use cases in employment, worker management, workforce monitoring, automated decision support, or significant employment decisions may be regulated as high-risk in particular jurisdictions. Customers must assess whether their deployment triggers these requirements before enabling or relying on relevant features.
- EU deployments: assess the EU AI Act, the GDPR, local labor law, and works council rules.
- UK deployments: assess the UK GDPR, the Data Protection Act 2018, ICO employment practices guidance, and consultation rules.
- US deployments: assess federal, state, and city employment, privacy, wage-hour, anti-discrimination, biometric, and automated decision tool requirements.
Contact ZoikoTime
For questions about this document or your legal rights:
- Email: sales@zoikotime.com
- Tel: 1-631-833-9395
- Toll-free: 1-800-484-5574