Privacy & Data Security Statement
Effective Date: 23-May-2025
Last Updated: 20-July-2025
At ComplyIQ360, we are committed to protecting your data privacy and maintaining robust data security standards. This statement outlines how we collect, process, store, and protect your data when you use our services, including the handling of files submitted for compliance checking and analysis.
📂What Data We Collect
- Files uploaded by users for standards and report validation
- Associated metadata (e.g., filename, MIME type, byte size, upload timestamp)
- Technical usage logs (e.g., timestamps, request IDs)
- Access audit records (who accessed which file, action taken, timestamp, service context/IP)
We do not access or store any personal or sensitive information beyond what you explicitly submit via file uploads or other user inputs. We recommend that you do not upload personal information unless it is strictly required for your own compliance objectives.
🔐How We Protect Your Data
- Regional data residency: All persistent customer files and generated compliance outputs are stored solely in AWS Sydney (ap-southeast-2).
- Secure uploads via HTTPS: File uploads occur through pre-signed S3 URLs, ensuring encrypted transmission over TLS (illustrated in the sketch after this list).
- Least-privilege & path isolation: Files are stored in isolated, job-specific paths. Access is restricted to the processing functions required to complete the requested analysis.
- Comprehensive audit logging: Every access event (upload, internal read, model processing preparation, deletion) is logged with a timestamp, actor/service identity, and action type.
- Retention under user control: Uploaded files remain stored until you explicitly delete them via the application or request deletion.
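For illustration, the sketch below shows roughly how a job-scoped pre-signed upload URL and an audit record with the fields listed above could be produced. The bucket name, key layout, expiry, and helper names are hypothetical examples, not our production configuration.

```python
# Illustrative sketch only: pre-signed upload URL and audit record.
# Bucket name, key layout, and helper names are hypothetical examples.
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3", region_name="ap-southeast-2")  # AWS Sydney


def create_upload_url(job_id: str, filename: str, expires_in: int = 300) -> str:
    """Return a short-lived pre-signed PUT URL scoped to a job-specific path."""
    key = f"jobs/{job_id}/uploads/{filename}"  # hypothetical job-isolated prefix
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "complyiq360-uploads", "Key": key},  # hypothetical bucket
        ExpiresIn=expires_in,  # the URL stops working after a few minutes
    )


def audit_event(actor: str, action: str, object_key: str) -> dict:
    """Build an audit record: timestamp, actor/service identity, action type."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,        # user or internal service identity
        "action": action,      # e.g. "upload", "read", "delete"
        "object_key": object_key,
    }
```

Because the pre-signed URL is scoped to a single object key and expires quickly, the client can upload directly over HTTPS without ever receiving broader storage credentials.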
🤖Third-Party AI Processing (OpenAI, Google Gemini & Anthropic Claude)
To check and validate files against compliance standards, we may (depending on the selected workflow) transmit the raw file content (or faithful textual representations thereof) to the paid APIs of OpenAI, Google Gemini, and Anthropic Claude for inference. Interactions with these generative AI platforms are transient: we do not persist full prompt/response transcripts beyond what is necessary to assemble and present your compliance result (see the illustrative sketch at the end of this section).
- No model training on your data: We use paid API tiers whose provider policies state that prompts and outputs are not used to train base models (any explicit enterprise opt‑in to the contrary would be separately documented to you).
- Abuse / security retention windows (provider-side):
  - OpenAI API: Inputs/outputs may be retained by OpenAI for up to 30 days for abuse monitoring (or for a shorter period, or not at all, under approved enterprise Zero Data Retention arrangements) and are then deleted.
  - Google Gemini (paid): Inputs/outputs from paid Gemini API calls are not used to train Google foundation models and are retained only temporarily for abuse detection, reliability, and legal obligations before routine deletion.
  - Anthropic Claude API: Inputs/outputs are retained for a limited window (typically up to 30 days) for trust & safety operations; enterprise plans can configure shorter or custom retention. They are not used to train models unless you explicitly opt in.
- Geographic processing: While your persistent storage remains in AWS Sydney, ephemeral processing by third-party AI vendors may occur on their global infrastructure (subject to each provider’s data protection and regional controls).
- Minimisation & scoping: We only submit content required for the specific compliance analysis you initiate. We do not transmit unrelated stored assets, audit logs, access credentials, or encryption keys.
If a provider’s legal obligations (e.g., litigation preservation orders) temporarily extend their retention timelines, those changes do not alter our own deletion schedule for data we control.
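As a rough illustration of the transient interaction pattern described above, the sketch below sends only the text needed for one analysis to a provider API and keeps only the compliance result, not the full prompt/response transcript. The model name, prompt wording, and helper names are hypothetical examples, not our production pipeline.

```python
# Illustrative sketch only: transient third-party inference.
# Model name, prompt, and helper names are hypothetical, not production code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def check_compliance(extracted_text: str, standard_name: str) -> str:
    """Send only the content needed for this analysis; keep only the result."""
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system",
             "content": f"Assess this document against {standard_name}."},
            {"role": "user", "content": extracted_text},
        ],
    )
    # Only the compliance result is kept for presentation to the user;
    # the full prompt/response transcript is not persisted by us.
    return response.choices[0].message.content
```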
🌍Data Location
- Persistent storage: All customer-uploaded files and generated compliance outputs are stored exclusively in AWS Sydney (ap-southeast-2).
- Transient processing: Raw file content provided for AI inference may be processed temporarily by OpenAI, Gemini, and Anthropic systems in other regions strictly for model inference, abuse detection, and required logging; it is not persisted by us outside Australia.
⏱️Retention & Deletion
- User-initiated deletion: When you delete a file (or request deletion), the primary object is queued for secure removal from S3, and the deletion propagates through internal caches and processing queues (see the sketch after this list).
- Post-deletion residual metadata: After file deletion, we retain only minimal operational metadata (job ID, anonymised or hashed file name, size, MIME type, processing timestamps, model usage metrics, and audit log references) for billing, security forensics, and capacity planning. This metadata is decoupled from file contents and contains no recoverable original content.
- Audit logs: Access audit entries (without file content) are retained for up to 12 months, then purged or anonymised.
- Third-party AI retention: See the transient provider retention windows above; we cannot accelerate deletion inside those providers beyond available enterprise controls.
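The sketch below outlines, under assumed bucket and key names, how user-initiated deletion and the residual metadata described above could work: the object is removed from S3 and only non-recoverable metadata (such as a one-way hash of the file name) is kept.

```python
# Illustrative sketch only: deletion plus residual, non-recoverable metadata.
# Bucket name, key layout, and field names are hypothetical examples.
import hashlib
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3", region_name="ap-southeast-2")


def delete_upload(job_id: str, filename: str) -> dict:
    """Remove the stored object and keep only minimal operational metadata."""
    key = f"jobs/{job_id}/uploads/{filename}"                 # hypothetical key layout
    s3.delete_object(Bucket="complyiq360-uploads", Key=key)   # hypothetical bucket

    # The file name survives only as a one-way hash, so neither the original
    # name nor the content can be recovered from this record.
    return {
        "job_id": job_id,
        "file_name_sha256": hashlib.sha256(filename.encode()).hexdigest(),
        "deleted_at": datetime.now(timezone.utc).isoformat(),
        "action": "delete",
    }
```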
📜Your Rights and Choices
You may request:
- Deletion of your uploaded content
- An access summary (audit log) of actions taken on your files
- Clarification about data processing & regional handling
Please contact chris.thompson@fcthree.com.au for any privacy-related concerns or to exercise your rights.