CheckyWorky

Terms of service

Last updated: February 2026

Account terms

You must provide accurate information when creating an account. You are responsible for maintaining the security of your account credentials. One person or entity per account.

Acceptable use

Use CheckyWorky to monitor your own products and services. Do not use the service to monitor third-party services without their permission, perform denial-of-service attacks, or violate any applicable laws.

Payment and billing

Paid plans are billed monthly. You can upgrade, downgrade, or cancel at any time. Changes take effect at the next billing cycle. Refunds are handled on a case-by-case basis.

Service availability

We aim for high availability but do not guarantee 100% uptime. Scheduled maintenance windows will be communicated in advance when possible.

Liability limitations

CheckyWorky is provided “as is.” We are not liable for indirect, incidental, or consequential damages. Our total liability is limited to the amount you paid in the 12 months preceding the claim.

Termination

Either party can terminate at any time. Upon termination, your data will be retained for 30 days before deletion unless you request immediate deletion.

Governing law

These terms are governed by the laws of Australia. Disputes will be resolved through binding arbitration or the courts of New South Wales, Australia.

Contact

For questions about these terms, email legal@checkyworky.com.

By the numbers

The average cost of downtime is often cited at about $5,600 per minute.

Gartner (2014), widely cited estimate

The cost of data breaches reached an all-time high, with the global average cost at $4.88 million.

IBM, Cost of a Data Breach Report (2024)

A large share of customers will switch to a competitor after more than one bad experience.

Zendesk, Customer Experience Trends (2023), widely cited CX finding

Outages and incident response remain a top operational risk area as organizations increase reliance on third-party and cloud services.

Google Cloud, Accelerate/DORA research on software delivery and reliability (2023)

Real-world examples

Missed alert due to third‑party login change (SSO flow updated)

Scenario: A team monitors a critical “log in → open dashboard” workflow. The identity provider updates its login page DOM and introduces a new consent screen, causing the synthetic script to fail intermittently. Alerts arrive late because retries are enabled and the check times out.

Outcome: Team updates the script to use stable selectors and adds a dedicated SSO health check. False positives drop by ~80% and mean time to detect real login outages improves from ~15 minutes to ~5 minutes.
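Why alerts arrived late in this scenario is easy to quantify: with retries enabled, every failed attempt burns its full timeout, and retries are spaced a check interval apart. A minimal sketch of that arithmetic, assuming hypothetical interval/timeout/retry settings (these are illustrative defaults, not CheckyWorky's actual configuration):

```python
# Hypothetical sketch: worst-case time-to-alert for a synthetic check with
# retries. All parameter values are illustrative, not product defaults.

def time_to_alert(check_interval_s: int, timeout_s: int, retries: int) -> int:
    """Worst-case seconds from the first failure until an alert fires,
    assuming every attempt runs to its timeout before failing and each
    retry waits one full check interval."""
    attempts = 1 + retries
    # Each failed attempt burns its timeout; retries add a full interval each.
    return attempts * timeout_s + retries * check_interval_s

# 5-minute interval, 60 s timeout, 2 retries: alerts can lag ~13 minutes.
print(time_to_alert(check_interval_s=300, timeout_s=60, retries=2))  # 780
# Dedicated SSO check: 1-minute interval, 30 s timeout, 1 retry.
print(time_to_alert(check_interval_s=60, timeout_s=30, retries=1))   # 120
```

The second configuration is roughly what the dedicated SSO health check buys: detection in ~2 minutes instead of ~13, independent of whatever the main journey check is doing.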

Usage overage surprise during an incident (frequency temporarily increased)

Scenario: During a production incident, engineers increase check frequency from every 5 minutes to every 1 minute across multiple locations. The billing cycle ends with unexpected overage charges because run limits were exceeded.

Outcome: They add budget alerts and a run-cap policy, and create an “incident mode” playbook with a time-boxed frequency increase. Next incident stays within budget while still improving visibility (detection in ~1–2 minutes).
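The overage risk here is plain arithmetic, which is why a time-boxed "incident mode" works: you can price the frequency bump before flipping it on. A sketch, using made-up plan numbers rather than any real CheckyWorky limits:

```python
# Hypothetical run-count math for a time-boxed "incident mode".
# Intervals, locations, and limits are illustrative, not plan values.

def runs_per_hour(interval_s: int, locations: int) -> int:
    """Check runs consumed per hour at a given interval and location count."""
    return (3600 // interval_s) * locations

def incident_mode_extra_runs(normal_interval_s: int, incident_interval_s: int,
                             locations: int, incident_minutes: int) -> int:
    """Extra runs consumed by temporarily raising check frequency."""
    normal = (incident_minutes * 60 // normal_interval_s) * locations
    boosted = (incident_minutes * 60 // incident_interval_s) * locations
    return boosted - normal

# 5 min -> 1 min across 3 locations for a 60-minute incident window:
print(incident_mode_extra_runs(300, 60, 3, 60))  # 144 extra runs
```

A 60-minute window costs 144 extra runs in this example; an open-ended frequency bump left on for a week would cost over 24,000, which is how the surprise invoice happens.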

Acceptable use violation avoided (high-frequency scraping-like checks)

Scenario: A founder sets a synthetic check to hit a third-party pricing page every 10 seconds to detect changes. The target site rate-limits and serves bot challenges, which also breaks legitimate monitoring.

Outcome: They replace it with a daily check plus vendor-provided APIs/feeds where available. Bot challenges disappear, check reliability increases, and they avoid risking account suspension for abusive traffic.
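The difference between "monitoring" and "scraping-like traffic" in this scenario is mostly volume, and it is worth computing before picking an interval. A back-of-envelope sketch (pure arithmetic, not a CheckyWorky API):

```python
# Illustrative request-volume arithmetic for choosing a check interval.

SECONDS_PER_DAY = 86_400

def requests_per_day(interval_s: int) -> int:
    """Hits sent to the target per day at a given check interval."""
    return SECONDS_PER_DAY // interval_s

print(requests_per_day(10))              # 8640 hits/day: reads as abuse
print(requests_per_day(86_400))          # 1 hit/day: polite change detection
```

At a 10-second interval a single check generates 8,640 requests a day against someone else's page; the daily check that replaced it generates one.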

Cancellation with data export (audit and postmortems)

Scenario: A small SaaS team migrates tooling and needs to retain 90 days of screenshots and alert history for compliance and postmortems. Without a clear export plan, they risk losing artifacts after cancellation.

Outcome: They export run history and incident timelines before canceling and document retention requirements in their internal policy. Postmortems remain complete and audit requests can be satisfied without re-accessing the old vendor.
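The export plan hinges on one date: the end of the post-termination retention window (30 days in the terms above). A small sketch of that deadline calculation; the cancellation date is illustrative:

```python
# Sketch: last day exported artifacts are guaranteed to still exist,
# given the 30-day post-termination retention window in these terms.
from datetime import date, timedelta

def export_deadline(cancel_date: date, retention_days: int = 30) -> date:
    """Final day to export run history after cancelling on cancel_date."""
    return cancel_date + timedelta(days=retention_days)

print(export_deadline(date(2026, 3, 1)))  # 2026-03-31
```

In practice the safe move is what the team above did: export before cancelling, so the deadline never matters.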

Key insights

1. Terms for monitoring tools usually disclaim warranties and limit liability—plan your reliability program assuming alerts can be delayed or missed (layer synthetics with metrics, logs, and error tracking).

2. Acceptable use is a real operational constraint for synthetic monitoring: high-frequency UI checks can look like scraping or abuse and may trigger rate limits or bot defenses, especially on third-party SaaS pages.

3. Credential handling is both a security and legal issue: using scoped tokens, least-privilege "monitoring-only" accounts, and rotation reduces the blast radius if credentials are exposed.

4. Billing risk often comes from incident-driven behavior (temporarily increasing frequency/locations). Usage caps and budget alerts prevent surprise invoices while keeping fast detection.

5. Cancellation and retention clauses matter for small teams: screenshots, HAR files, and alert timelines are valuable for postmortems and audits—exportability and retention windows should be explicit.

6. Third-party SaaS monitoring is common but fragile: UI flows change, MFA/SSO adds complexity, and bot protections can break checks—API checks and vendor-supported health endpoints are often more stable.

7. Security and privacy obligations (e.g., breach costs and compliance expectations) make it important that terms clearly define data processing, retention, and responsibilities for secrets and personal data.

Pro tips

💡 Create a dedicated "monitoring-only" user/API token per environment (prod/staging) with least privilege, and rotate it on a schedule (e.g., quarterly) or after staff changes.

💡 Add a budget guardrail: set alerts at 70/90% of run limits and document an "incident mode" that increases frequency for a fixed time window (e.g., 60 minutes) to avoid surprise overages.

💡 Design checks to be stable and compliant: prefer API checks for third-party SaaS where possible, throttle UI checks (e.g., 5–10 minutes), and use resilient selectors (data-testids) to reduce false alarms after UI changes.
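The budget-guardrail tip can be sketched as a tiny threshold check. Everything here is illustrative (the function name, the 10,000-run limit); the 70%/90% thresholds come from the tip itself:

```python
# Hypothetical guardrail: which budget alerts have fired given current usage.
# Thresholds (70%/90%) mirror the tip above; the run limit is made up.

def fired_alerts(runs_used: int, run_limit: int,
                 thresholds: tuple = (0.7, 0.9)) -> list:
    """Return the threshold fractions that current usage has crossed."""
    usage = runs_used / run_limit
    return [t for t in thresholds if usage >= t]

print(fired_alerts(7_500, 10_000))  # [0.7]
print(fired_alerts(9_200, 10_000))  # [0.7, 0.9]
```

Wiring each returned threshold to a notification gives the early warning that makes a time-boxed "incident mode" safe to use.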

How CheckyWorky compares

vs Datadog Synthetics

Powerful enterprise-grade synthetics integrated with APM/logs, but can be costlier and more complex for small teams. CheckyWorky can differentiate with simpler “pretend customer” workflows, lightweight setup, and clearer small-team-friendly billing/usage controls.

vs Checkly

Developer-centric, code-first checks (Playwright) with strong CI/CD integration. CheckyWorky can differentiate by focusing on fast, guided setup and non-expert-friendly maintenance for common SaaS flows (SSO, MFA workarounds, stable selectors, run artifacts).

vs UptimeRobot

Great for basic endpoint uptime at low cost, but limited for full browser workflows and multi-step customer journeys. CheckyWorky can differentiate with true end-to-end synthetic journeys (login, checkout, billing portal) and richer artifacts (screenshots/step traces) for debugging.