CheckyWorky

Security, plainly explained

Clear, practical answers about how CheckyWorky handles your data.

What data CheckyWorky stores

Account info: email, team name, and billing details.
Check definitions: workflow steps, assertions, and scheduling config.
Evidence: screenshots, console logs, and run metadata (retained per your plan's data retention policy).

How credentials are handled

Test account credentials are encrypted at rest and in transit. We recommend using dedicated test accounts with least-privilege access. Secrets are stored using industry-standard vault infrastructure and are never exposed in logs or alerts.
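The "never exposed in logs or alerts" control can be enforced in code as well as policy. A minimal sketch (illustrative only, not CheckyWorky's actual implementation) of a logging filter that masks registered secret values before a record is emitted:

```python
import logging

class SecretRedactingFilter(logging.Filter):
    """Mask registered secret values in log messages before they are emitted."""

    def __init__(self, secrets):
        super().__init__()
        self._secrets = [s for s in secrets if s]

    def filter(self, record):
        # Render the message once, then scrub every known secret from it.
        message = record.getMessage()
        for secret in self._secrets:
            message = message.replace(secret, "[REDACTED]")
        record.msg = message
        record.args = ()
        return True
```

Attach the filter to the logger (or handler) that feeds alerts, and register each test credential with it at check setup time.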

Access controls

Team roles: owner, admin, and member. Owners manage billing and team access. Admins can create and edit checks. Members can view runs and results.
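The three-role model above reduces to a simple permission map. A sketch with illustrative permission names (not CheckyWorky's actual API):

```python
# Role-to-permission mapping mirroring the roles described above.
# Permission names are illustrative, not CheckyWorky's API.
ROLE_PERMISSIONS = {
    "owner":  {"manage_billing", "manage_team", "edit_checks", "view_runs"},
    "admin":  {"edit_checks", "view_runs"},
    "member": {"view_runs"},
}

def can(role, permission):
    """Return True if the given role grants the permission; unknown roles get nothing."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Defaulting unknown roles to an empty set keeps the check fail-closed: a typo in a role name denies access rather than granting it.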

Data retention

Default retention depends on your plan (7–90 days). Failure evidence is retained longer for investigation. You can request data deletion at any time.

Compliance

We're working toward SOC 2 Type II compliance. In the meantime, we follow industry best practices for data protection, encryption, and access controls.

Responsible disclosure

Found a security issue? Email security@checkyworky.com and we'll respond within 48 hours. We appreciate responsible disclosure and will work with you to resolve issues quickly.

Questions? Contact us and we'll give you a straight answer.

By the numbers

The average total cost of a data breach was $4.88 million, the highest on record.

IBM, Cost of a Data Breach Report (2024)

Organizations with higher levels of security automation and AI reported substantially lower breach costs (often cited as savings on the order of millions compared to low automation).

IBM, Cost of a Data Breach Report (2024)

Credential compromise remains one of the most common initial access vectors observed in real-world incidents.

Verizon, Data Breach Investigations Report (DBIR) (2024)

Misconfigurations and human error continue to be frequent contributors to cloud security incidents, reinforcing the need for least privilege and strong change controls.

Palo Alto Networks Unit 42, Cloud Threat Report (2024)

Real-world examples

Preventing a leaked secret via screenshot redaction

Scenario: A small SaaS team monitors an onboarding flow that includes a one-time invite link and an API key shown after signup. Their synthetic tool captures screenshots for every failure and posts them to Slack.

Outcome: By enabling field masking/redaction for sensitive UI elements and limiting Slack alerts to error summaries (no screenshots by default), the team reduced accidental secret exposure risk while keeping incident response fast. Mean time to identify (MTTI) stayed under 10 minutes without sharing sensitive artifacts broadly.
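Redaction like this is usually pattern-based. A hedged sketch of scrubbing alert text before it leaves the monitoring system (the patterns are illustrative; tune them to the secrets your own flows actually surface):

```python
import re

# Patterns for values that should never leave the monitoring system in
# plain text. These are illustrative examples, not an exhaustive set.
SENSITIVE_PATTERNS = [
    re.compile(r"sk_[A-Za-z0-9]{8,}"),          # API-key-style tokens
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),     # email addresses
    re.compile(r"https://\S*invite\S*"),        # one-time invite links
]

def redact(text):
    """Mask sensitive values in alert text before it is posted anywhere."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Running every error summary through a function like this before posting to Slack means a new leak path requires only a new pattern, not a change to every alert integration.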

Least-privilege test accounts for production checks

Scenario: Checks log into production to validate billing and account settings pages. Initially the team used an admin account shared across engineers and the monitoring tool.

Outcome: They switched to a dedicated monitoring user with read-only permissions and a restricted role. When an engineer’s laptop was later compromised, the monitoring account could not be used to change billing settings. The blast radius was limited and no customer-impacting changes occurred.

Short-lived tokens to avoid long-lived password storage

Scenario: A B2B app supports OAuth for internal tools. The team’s synthetic checks originally stored a static password that was rotated quarterly.

Outcome: They moved to scoped OAuth tokens with shorter lifetimes and automated rotation. Password rotation work dropped to near zero, and the window of exposure from a leaked token was reduced from months to hours.
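The shift from static passwords to short-lived scoped tokens can be modeled simply. A sketch (class and scope names are hypothetical; real tokens come from your identity provider's token endpoint) of checking a token's scope and lifetime before a check runs:

```python
import time

class ScopedToken:
    """A short-lived access token with an explicit scope and expiry."""

    def __init__(self, value, scope, ttl_seconds, issued_at=None):
        self.value = value
        self.scope = scope
        issued = time.time() if issued_at is None else issued_at
        self.expires_at = issued + ttl_seconds

    def is_valid(self, required_scope, now=None):
        """A token is usable only within its lifetime and declared scope."""
        current = time.time() if now is None else now
        return self.scope == required_scope and current < self.expires_at
```

Checking both scope and expiry before each run means a leaked token is useless outside its narrow purpose and short window, which is exactly the exposure reduction the scenario describes.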

Retention tuning to reduce sensitive artifact exposure

Scenario: During incident review, the team realized that video replays of failed check runs sometimes included customer names and partial addresses in UI tables.

Outcome: They reduced artifact retention from 90 days to 14 days, kept only aggregated uptime metrics for 12 months, and added automatic deletion for runs tagged as containing sensitive data. This lowered long-term exposure while preserving trend reporting.
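A retention decision like this reduces to a small policy function. An illustrative sketch using the windows from the scenario (14 days for artifacts, 12 months for aggregated metrics, immediate deletion for runs tagged sensitive):

```python
from datetime import datetime, timedelta, timezone

# Retention windows matching the scenario above. Values are illustrative.
ARTIFACT_RETENTION = timedelta(days=14)
METRIC_RETENTION = timedelta(days=365)

def should_delete(kind, created_at, tags=(), now=None):
    """Return True if a stored item is past retention or tagged sensitive."""
    now = now or datetime.now(timezone.utc)
    if kind == "artifact" and "sensitive" in tags:
        return True  # delete sensitive artifacts regardless of age
    window = ARTIFACT_RETENTION if kind == "artifact" else METRIC_RETENTION
    return now - created_at > window
```

A nightly job that sweeps storage with a function like this keeps the policy auditable in one place instead of scattered across cleanup scripts.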

Key insights

1. Synthetic monitoring security is mostly about secrets hygiene: dedicated test identities, least privilege, rotation, and avoiding long-lived shared credentials.

2. Artifacts (screenshots, videos, HAR files) are a top real-world leakage path—treat them like production logs and apply masking, redaction, and restricted sharing by default.

3. Data minimization is a competitive advantage: retain only what helps debug incidents, and keep sensitive artifacts on shorter retention than aggregate metrics.

4. Clear answers beat vague promises on security pages: teams want specifics on encryption, access controls, retention, sub-processors, and incident response timelines.

5. Support for private runners (or controlled egress IPs) is often the deciding factor for teams with allowlisting, regulated data, or strict network boundaries.

6. Compliance is not just certificates—small teams need practical documents: DPA, sub-processor list, security overview, and a responsible disclosure process.

7. Credential compromise remains a common breach driver, so monitoring vendors should design for the assumption that secrets will eventually be targeted.

Pro tips

💡 Create a dedicated monitoring user in every app you test (prod and staging). Give it the minimum permissions needed for the journey, and name it clearly (e.g., monitor@yourcompany.com) so audit logs are easy to interpret.

💡 Turn on masking/redaction before you turn on screenshots/videos in alerts. Keep Slack/email/PagerDuty payloads to: check name, step, error message, region, and a link—avoid attaching artifacts by default.
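The minimal-payload rule above can be enforced with an allowlist. A sketch (field names are illustrative; adapt them to your alerting tool's schema):

```python
# Only non-sensitive summary fields ever leave the monitoring system.
# Field names are illustrative, not a real CheckyWorky schema.
ALLOWED_FIELDS = ("check_name", "step", "error_message", "region", "run_url")

def build_alert_payload(run):
    """Reduce a full run record to a safe alert payload with no artifacts."""
    return {key: run[key] for key in ALLOWED_FIELDS if key in run}
```

An allowlist is safer than a blocklist here: a new artifact type added to run records later is excluded automatically instead of leaking until someone remembers to block it.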

💡 Set two retention windows: short for artifacts (e.g., 7–14 days) and longer for aggregated uptime metrics (e.g., 12 months). Review quarterly to ensure you’re not keeping sensitive UI data longer than necessary.

How CheckyWorky compares

vs Datadog Synthetics

Powerful enterprise platform with deep observability integration, but it can be heavier to configure and govern for small teams. CheckyWorky differentiates with a simpler, ‘plainly explained’ security posture and workflows built for smaller teams (least-privilege test accounts, masking defaults, and straightforward retention controls).

vs Checkly

Developer-focused synthetic monitoring with strong CI/CD and code-first checks. CheckyWorky differentiates by emphasizing guided, non-expert-friendly credential handling, safer artifact defaults (redaction plus minimal alert payloads), and clearer security documentation for teams without a dedicated security engineer.

vs UptimeRobot

Great for basic uptime checks, but not designed for authenticated multi-step ‘pretend customer’ journeys with sensitive credentials and artifacts. CheckyWorky focuses on end-to-end logged-in flows and the security controls required to run them responsibly.

Start free with confidence.
