I have spent the better part of a decade working with PostgreSQL in production. Not the kind of PostgreSQL where you run a tutorial, create a users table, and call it a day — the kind where you wake up at 3am because replication lag just crossed 30 seconds and nobody knows why, or where a query that ran in 2ms for six months suddenly takes 400ms because the planner decided a sequential scan was a better idea after a routine ANALYZE.
Over those years, I tried every monitoring tool on the market. Some were genuinely good at showing me dashboards. Lots of colourful graphs. CPU usage, memory consumption, connections per second — the same metrics I could get from htop and a few pg_stat_* queries. What none of them did was tell me what to do. A graph tells you something changed. It does not tell you which index to create, which configuration parameter to adjust, or which long-running transaction is quietly marching you toward XID wraparound.
The tools that did offer recommendations were priced for enterprises with six-figure observability budgets. And the ones I could afford ignored the PostgreSQL ecosystem entirely. If you run pgvector, TimescaleDB, or PostGIS — and increasingly, most serious PostgreSQL deployments run at least one of these — your monitoring tool has nothing useful to say about them.
That frustration is why I built myDBA.dev.
The core philosophy is simple: monitoring should be actionable. Every metric shown should lead to a clear next step. If myDBA tells you something is wrong, it should also tell you how to fix it — ideally with a SQL script you can copy, paste, and run. Not a vague suggestion to “consider optimizing your queries.” An actual CREATE INDEX CONCURRENTLY statement with the right columns. An actual ALTER SYSTEM SET command with the recommended value. A ready-to-execute VACUUM command for the specific table that needs it.
myDBA is live today at myDBA.dev with a free tier that includes every feature in the product. No paywalled dashboards, no “upgrade to see EXPLAIN plans.” If you manage PostgreSQL databases and want to follow along, you can set it up in under five minutes and see everything I am about to describe working against your own data.
Let me walk you through every piece of it.
The Architecture
One of the first decisions I made — and one I have never regretted — was to build myDBA with no application server. All business logic lives in PostgreSQL itself, implemented as RPC functions called via PostgREST. The database is not just where the data lives; the database is the application.
This was a deliberate choice for several reasons. Fewer moving parts means fewer things that break at 3am. It means I am dog-fooding PostgreSQL itself as my application platform, which keeps me honest about the tool I am building. And it means the security model is PostgreSQL’s own Row Level Security — multi-tenant isolation enforced at the database layer, not in application code that someone might forget to add a WHERE clause to.
Authentication flows through Clerk, which issues JWTs that PostgREST validates via JWKS. RLS policies use helper functions to enforce org-scoping on every query. The result is that a user in Organization A cannot see Organization B’s data, period, no matter what PostgREST endpoint they call.
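To make the org-scoping pattern concrete, here is a minimal sketch of how an RLS policy of this shape typically looks. The table, column, and helper names are illustrative assumptions, not myDBA's actual schema; `request.jwt.claims` is the setting PostgREST uses to expose validated JWT claims to SQL.

```sql
-- Hypothetical helper: read the org claim from the JWT that
-- PostgREST exposes via the request.jwt.claims setting.
CREATE OR REPLACE FUNCTION current_org_id() RETURNS uuid AS $$
  SELECT (current_setting('request.jwt.claims', true)::jsonb ->> 'org_id')::uuid;
$$ LANGUAGE sql STABLE;

-- Illustrative table name; every tenant-owned table gets the same treatment.
ALTER TABLE metrics ENABLE ROW LEVEL SECURITY;

-- Rows are only visible (and writable) within the caller's org.
CREATE POLICY org_isolation ON metrics
  USING (org_id = current_org_id());
```

Because the policy is attached to the table, there is no code path — no endpoint, no query — that can skip it.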
The collector is a lightweight Go binary with a three-tier collection model: Fast (every 15 seconds), Medium (every 60 seconds), and Slow (every 5 minutes). The thinking behind three tiers is about balancing granularity with overhead. Active sessions and locks change constantly — you need 15-second resolution to see what is happening right now. Index and table statistics shift more slowly — 60 seconds is plenty. And expensive operations like EXPLAIN plan collection should only run every 5 minutes to keep overhead negligible.
In practice, the collector uses under 1% CPU and roughly 50MB of RAM. You would not notice it running alongside your production workload.
The collector comes in two flavours. You can download the Go binary and run it on your own infrastructure — a single systemd service, nothing else required. Or you can use the managed collector option, where I host it for you on a cloud fleet. Zero infrastructure on your end, metrics start flowing in seconds. Both options matter because some teams have strict policies about what runs in their network, and others just want monitoring without any operational burden.
Query Intelligence
Query monitoring is the bread and butter of any database tool, and it is where most tools stop too early. myDBA starts with pg_stat_statements — execution time, call frequency, I/O consumption, rows returned — but that is just the foundation.
Statement Type Filtering
Every query is classified by statement type: SELECT, INSERT, UPDATE, DELETE, UTILITY. This sounds trivial, but it is surprisingly useful in practice. When a database is under write pressure, you want to instantly filter to just INSERT and UPDATE queries to find the culprits. When read latency spikes, you want to isolate SELECTs and see which ones are doing sequential scans on million-row tables. It is a simple filter pill, yet most tools do not bother to offer it.
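If you want the same filter by hand, pg_stat_statements has no statement-type column, so one way to approximate the classification (not necessarily how myDBA implements it) is by the leading keyword of the normalized query text. Shown here: the ten most time-consuming writes.

```sql
-- Approximate "writes only" filter over pg_stat_statements
-- (total_exec_time is the PostgreSQL 13+ column name).
SELECT query, calls, total_exec_time
FROM pg_stat_statements
WHERE query ~* '^\s*(insert|update)'
ORDER BY total_exec_time DESC
LIMIT 10;
```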
Automatic EXPLAIN Plan Collection
This is where things get interesting. Every 5-minute slow cycle, the collector automatically runs EXPLAIN on the top 50 slowest queries. On PostgreSQL 16 and above, it uses EXPLAIN (GENERIC_PLAN), which produces representative plans without requiring actual parameter values — safe and non-invasive. On older versions, it falls back to standard EXPLAIN.
Why does automatic collection matter? Because in my experience, most DBAs never run EXPLAIN until something is already on fire. A query degrades gradually over weeks as a table grows, or it suddenly falls off a cliff after a plan change, and by the time anyone thinks to run EXPLAIN, they are already in an incident. With automatic collection, you have a history of plans. You can see exactly when the plan changed and what it looked like before.
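The GENERIC_PLAN option is worth knowing on its own: it is the one form of EXPLAIN that accepts `$n` placeholders directly, so you can explain a normalized query straight out of pg_stat_statements without inventing parameter values. The query text here is illustrative.

```sql
-- PostgreSQL 16+: a representative plan for a parameterized query,
-- no parameter values required and nothing is executed.
EXPLAIN (GENERIC_PLAN)
SELECT * FROM orders WHERE customer_id = $1 AND created_at > $2;
```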
The visual rendering includes a tree view, a flame graph for time distribution, a timing waterfall, and a bottleneck bar that highlights the most expensive nodes. Each plan gets a performance grade with specific SQL recommendations — missing indexes, unnecessary sort operations, hash joins that should be merge joins, nested loops on large result sets.
EXPLAIN ANALYZE
For situations where estimates are not enough, myDBA supports EXPLAIN ANALYZE with a per-connection toggle. This actually executes the query and gives you real timing and row count data instead of the planner’s estimates. You enable it when you need it, disable it when you do not want the overhead.
Plan Regression Detection
Every EXPLAIN plan gets fingerprinted. When the fingerprint changes — meaning PostgreSQL has chosen a different plan — myDBA detects it and correlates the change with performance impact. This catches a scenario I have seen more times than I can count: a query runs fine for months using an index scan, then PostgreSQL runs ANALYZE, the statistics change slightly, the planner switches to a hash join, and suddenly that query is 50x slower.
Without plan regression detection, you are debugging blind. You know something got slow, but you do not know why or when it changed. With plan fingerprinting and history, you can see the exact moment the plan changed, compare the old and new plans side by side, and understand precisely what happened.
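One simple way to build such a fingerprint — an illustrative sketch, not necessarily myDBA's actual method — is to strip the volatile cost and row estimates out of the plan text and hash what remains, so the hash only changes when the plan's shape changes.

```sql
-- Replace '<EXPLAIN output here>' with the captured plan text.
-- Cost and timing annotations vary run to run; the node structure
-- that survives the regexp is what defines the plan's identity.
SELECT md5(
  regexp_replace(
    '<EXPLAIN output here>',
    '\(cost=[^)]*\)|\(actual[^)]*\)',
    '', 'g')
) AS plan_fingerprint;
```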
auto_explain Intelligence
If you already have auto_explain enabled on your PostgreSQL instance, myDBA detects it automatically, understands the plan data level being captured, and implements smart skipping to avoid collecting duplicate plans. It is aware of your existing setup and does not fight with it.
Query Sample Parameter Extraction
Normalized queries are useful for aggregation, but they hide the actual parameter values that cause problems. A query like SELECT * FROM orders WHERE customer_id = $1 AND created_at > $2 might be fast for most customers but catastrophically slow for one customer with 500,000 orders. myDBA’s log parser extracts actual parameter values from PostgreSQL logs, substitutes the $N placeholders, and generates executable sample queries. You can copy the exact query with the exact parameters that caused the slowdown and run it in your own psql session.
Function Monitoring
If you use PL/pgSQL functions, myDBA tracks call counts, average duration, and self-time for each function. This catches a common blind spot — a slow query might not be a slow query at all, but a PL/pgSQL function that calls dozens of queries in a loop.
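The underlying data comes from PostgreSQL's own function statistics view, which you can inspect directly — note that it is empty unless `track_functions` is set to `pl` (or `all`).

```sql
-- Requires track_functions = 'pl' or 'all' in postgresql.conf.
SELECT funcname, calls,
       total_time,   -- includes time spent in functions it calls
       self_time     -- time spent in this function alone
FROM pg_stat_user_functions
ORDER BY self_time DESC
LIMIT 10;
```

A large gap between total_time and self_time is the loop-of-queries pattern described above: the function itself is cheap, but what it calls is not.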
Health Scoring and Actionable Diagnostics
myDBA runs 75+ automated health checks across 12 domains: Configuration, Performance, Vacuum, Replication, Indexes, Security, Storage, Connections, TimescaleDB, XID Wraparound, pgvector, and Wait Events.
Each finding gets a priority ranking from P1 (critical — your database is at risk) to P4 (informational — something to be aware of). But the priority ranking is not the important part. The important part is what comes with every finding.
Every single health check result includes a ready-to-run SQL fix script.
Not “you should consider increasing shared_buffers.” Instead: ALTER SYSTEM SET shared_buffers = '4GB'; SELECT pg_reload_conf(); with an explanation of why 4GB is appropriate for your specific server’s RAM.
Not “this table has bloat.” Instead: VACUUM (VERBOSE) schema.table_name; or, if the bloat is severe enough, a full pg_repack command with the right flags.
Not “you have unused indexes.” Instead: DROP INDEX CONCURRENTLY schema.index_name; with the estimated space savings and a note about which queries it was supposed to serve.
This is the philosophy I keep coming back to: if a tool can identify a problem, it can identify the fix. Making the human go Google the solution, read three Stack Overflow answers, and figure out which one applies to their PostgreSQL version and configuration is a waste of everyone’s time. The tool already has all the context it needs to generate the correct fix. So it should.
Health check scores are tracked over time, so you can see whether your database health is improving or degrading week over week. And if you are aware of an issue but have chosen to accept it — maybe you know that table has bloat but a VACUUM FULL would require downtime you cannot afford right now — you can suppress the finding so it does not clutter your dashboard.
The Cluster-Aware Index Advisor
This is one of the features I am most proud of, because it solves a problem I have personally watched cause production incidents more than once.
The index advisor detects unused indexes, duplicate indexes, overlapping indexes, and missing indexes based on sequential scan analysis. It calculates space-waste estimates so you can prioritize — an unused 50GB index is more urgent than an unused 50MB one. It generates ready-to-execute CREATE INDEX CONCURRENTLY and DROP INDEX CONCURRENTLY statements.
But the key innovation is cluster-awareness. The advisor aggregates index usage data across primaries AND replicas.
Here is the real-world problem this solves. A team looks at their primary’s pg_stat_user_indexes and sees that idx_orders_customer_region has zero scans. It looks unused. They drop it. Within minutes, their analytics service — which runs against the read replica — starts timing out. That index was serving every dashboard query in their reporting layer. They just never checked the replica.
This happens constantly. I have seen it at multiple companies. The primary and the replicas often run very different workloads. The primary handles OLTP writes and simple lookups. The replicas handle complex analytical queries, reporting dashboards, and data exports. An index can have zero usage on the primary and be critical on the replica.
myDBA’s index advisor checks usage across every node in your cluster before recommending a drop. If an index is unused on the primary but heavily used on a replica, the advisor will not recommend dropping it — it will tell you where it is being used and let you make an informed decision.
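For contrast, this is the single-node view that makes the mistake so easy. Run against the primary alone, it will happily flag indexes that a replica depends on; the same query has to be run on every node before a DROP is actually safe — which is exactly the aggregation the advisor automates.

```sql
-- "Unused" indexes on THIS node only. idx_scan counters are
-- per-node; a zero here says nothing about the replicas.
SELECT schemaname, relname, indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```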
I have not found another monitoring tool that does this. Most tools only look at the primary.
Extension Monitoring — The Biggest Gap in the Market
PostgreSQL’s extension ecosystem is one of its greatest strengths. pgvector has turned PostgreSQL into a serious vector database. TimescaleDB makes it a time-series powerhouse. PostGIS is the gold standard for spatial data. pg_cron and pg_partman handle scheduling and partitioning. These extensions are not niche — they are running in production at thousands of companies.
And yet, monitoring tools pretend they do not exist. You get generic PostgreSQL metrics, and your extensions are a black box.
This is the gap I set out to close.
pgvector
The pgvector module provides a complete picture of your vector search infrastructure. It inventories every vector index — IVFFlat and HNSW — with dimensions, sizes, build parameters, and estimated recall. It analyses vector columns across your tables: storage impact, index coverage, and gaps where a vector column exists but has no index.
Query performance monitoring identifies similarity search patterns and tracks their execution time. A settings audit checks ivfflat.probes, hnsw.ef_search, and maintenance_work_mem against best practices and your data volume.
There are 13 dedicated health checks (V1 through V13) covering everything from insufficient probe counts to HNSW indexes with suboptimal m and ef_construction parameters.
Why does this matter? Because a misconfigured HNSW index can be 10x slower than it should be, and the only way you would know is by running benchmarks yourself. If your hnsw.ef_search is set to the default but your dataset has grown from 100,000 to 10 million rows, your recall has dropped and your search latency has increased, and nothing in standard PostgreSQL monitoring will tell you.
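The two GUCs the audit checks can be adjusted per session, which makes them easy to experiment with. The values below are illustrative — the right numbers depend on your dataset size and recall target.

```sql
SET hnsw.ef_search = 100;   -- default 40; higher = better recall, slower queries
SET ivfflat.probes = 10;    -- default 1; a common heuristic is sqrt(lists)
SHOW maintenance_work_mem;  -- governs index build speed and quality
```

The failure mode in the paragraph above is precisely a default `ivfflat.probes = 1` or `hnsw.ef_search = 40` left in place while the table grew two orders of magnitude.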
TimescaleDB
The TimescaleDB module monitors hypertable sizes, chunk counts, and compression ratios. It provides chunk-level visibility — per-chunk sizes, compression status, and time ranges — so you can see exactly where your storage is going.
Background job tracking shows you which TimescaleDB jobs are running, which are failing, and how long they take. Compression analytics compare before-and-after sizes so you can see the actual effectiveness of your compression policies.
Continuous aggregate monitoring tracks refresh lag and data freshness. If your hourly aggregate has not refreshed in 3 hours, myDBA will tell you before your dashboards start showing stale data.
The real problem this solves: uncompressed chunks silently waste enormous amounts of storage. I have seen TimescaleDB deployments where compression was enabled but the policy was misconfigured, and 90% of chunks were sitting uncompressed. The database was using 500GB of storage that should have been 50GB. Standard PostgreSQL monitoring shows you total database size and shrugs.
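Spotting that misconfiguration by hand is a one-query check against TimescaleDB's informational views (TimescaleDB 2.x):

```sql
-- Compressed vs. uncompressed chunk counts per hypertable.
-- A large uncompressed count on a table with a compression
-- policy means the policy is not doing its job.
SELECT hypertable_name,
       count(*) FILTER (WHERE is_compressed)     AS compressed,
       count(*) FILTER (WHERE NOT is_compressed) AS uncompressed
FROM timescaledb_information.chunks
GROUP BY hypertable_name;
```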
PostGIS
The PostGIS module inventories geometry and geography columns, analyses spatial indexes (GiST, BRIN, SP-GiST), and detects spatial query patterns. Geometry quality metrics check validity, size, vertex counts, and type distribution.
There are 5 anti-pattern detections — things like using ST_Distance without a spatial index, or storing coordinates as separate float columns instead of proper geometry types. 7 health checks (G13 through G19) cover spatial index effectiveness, geometry validity ratios, and common misconfigurations.
Here is something most people do not realize about PostGIS: invalid geometries cause silent query failures. A self-intersecting polygon will not throw an error in most spatial functions — it will just return wrong results. You will get incorrect area calculations, missed spatial joins, and boundary queries that miss records they should include. myDBA checks geometry validity across your spatial columns and tells you exactly which tables have problems and what percentage of their geometries are invalid.
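The validity check itself is straightforward PostGIS; here it is for a hypothetical table `parcels` with a geometry column `geom`:

```sql
-- How many geometries are silently wrong?
SELECT count(*) FILTER (WHERE NOT ST_IsValid(geom)) AS invalid,
       count(*)                                     AS total
FROM parcels;

-- ST_MakeValid repairs most invalid geometries in place.
UPDATE parcels SET geom = ST_MakeValid(geom)
WHERE NOT ST_IsValid(geom);
```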
Raster and topology monitoring round out the PostGIS coverage for teams using those capabilities.
pg_cron and pg_partman
The pg_cron module tracks job status, execution history, and success/failure rates. If your nightly aggregation job has been silently failing for a week, you will know.
The pg_partman module monitors partition configurations, creation history, and partition sizes. It catches the common problem of partitions growing unevenly or partition maintenance jobs not running.
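The "silently failing for a week" scenario is visible in pg_cron's own run history, if you know to look:

```sql
-- Recent failures from pg_cron's run log (cron.job_run_details
-- is available in pg_cron 1.4+).
SELECT jobid, command, status, return_message, start_time
FROM cron.job_run_details
WHERE status = 'failed'
  AND start_time > now() - interval '7 days'
ORDER BY start_time DESC;
```

The catch, of course, is that nobody runs this query until they already suspect a problem — which is why myDBA surfaces it as a health check instead.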
The Gap This Fills
I want to be direct about this: I have not found another PostgreSQL monitoring tool that offers dedicated monitoring for any of these extensions. Not pgvector, not TimescaleDB, not PostGIS. Some tools will show you that the extension is installed. None of them will tell you that your HNSW index has suboptimal build parameters, that your TimescaleDB compression is not working, or that 15% of your geometries are invalid. This is the single biggest gap in the PostgreSQL monitoring market, and filling it is a core part of what myDBA does.
Real-Time Diagnostics
Activity Monitor
The activity monitor shows all active connections in real time — current queries, session states, wait events, and transaction duration. It is the equivalent of staring at pg_stat_activity in psql, but with filtering, sorting, and drill-through to query details.
Lock Chain Visualization
A flat list of locks is useless for debugging contention. What you need is the chain. Session A is blocked by Session B, which is blocked by Session C, which is holding an AccessExclusiveLock because someone ran ALTER TABLE without thinking about it. myDBA builds full blocking dependency graphs with root cause identification. You can see the entire chain and go straight to the root blocker instead of playing detective through pg_locks output.
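If you have ever walked the chain by hand, this is roughly the starting query: `pg_blocking_pids()` gives each waiting session the PIDs blocking it, and the root blocker is the session that appears as a blocker but never as a waiter.

```sql
-- Every session that is currently blocked, and who is blocking it.
SELECT a.pid,
       pg_blocking_pids(a.pid) AS blocked_by,
       a.state,
       left(a.query, 60)       AS query
FROM pg_stat_activity a
WHERE cardinality(pg_blocking_pids(a.pid)) > 0;
```

Tracing those arrays back to a root by hand, mid-incident, is the detective work the dependency graph eliminates.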
Wait Event Heatmaps
Wait events tell you what PostgreSQL is actually spending its time on. myDBA breaks them down by category — I/O waits, lock waits, CPU, internal waits — and shows them as heatmaps over time with drill-through to the contributing queries. When your database feels slow but CPU looks fine, wait event analysis usually reveals the answer: you are I/O bound, or you have lock contention that is serializing your workload.
Interactive Topology
For teams managing replication setups, myDBA renders interactive topology diagrams showing primaries, replicas, and cascading standbys. Replication lag, WAL positions, and connection status are all visible at a glance. When you manage a primary with three read replicas and a cascading standby off one of them, a visual diagram is worth a thousand rows of pg_stat_replication output.
WAL and Vacuum Monitoring
WAL monitoring tracks generation rate, archive status, and checkpoint timing. Vacuum worker timelines show dead tuple ratios and per-table autovacuum progress, so you can see which tables are being vacuumed, how fast the dead tuple count is growing, and whether autovacuum is keeping up.
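The dead-tuple ratio behind that vacuum timeline comes from pg_stat_user_tables, and a quick manual version of the "is autovacuum keeping up?" check looks like this:

```sql
-- Tables with the most dead tuples, and when autovacuum last ran.
SELECT relname, n_live_tup, n_dead_tup,
       round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1)
         AS dead_pct,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```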
XID Wraparound Tooling
Transaction ID wraparound is the nightmare scenario in PostgreSQL. When the transaction ID counter approaches its 2-billion limit, PostgreSQL forces itself into single-user mode to prevent data corruption. Your database is effectively down. No reads, no writes, nothing — until you manually run a vacuum that can take hours on large tables.
The insidious part is that it sneaks up on you. Transaction age grows slowly, day by day, and everything looks fine until it does not. A long-running transaction that has been open for a week can prevent the vacuum from advancing the transaction ID horizon, and you will not notice until you are in the danger zone.
myDBA monitors transaction age continuously, detects blockers — long-running transactions, abandoned prepared transactions, replication slots preventing cleanup — and generates recovery scripts. A runbook wizard walks you through the recovery process step by step if you are already in trouble.
Most monitoring tools will show you the current XID age as a number on a dashboard. myDBA tells you why the age is high, what is blocking the vacuum from making progress, and exactly what to run to fix it.
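For reference, these are the standard manual checks the tooling automates: per-database XID age (you want to act well before `autovacuum_freeze_max_age`, default 200 million), plus the usual blockers that hold the freeze horizon back.

```sql
-- Per-database transaction ID age.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database ORDER BY 2 DESC;

-- Long-running transactions that pin the horizon.
SELECT pid, xact_start, state, left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start < now() - interval '1 hour';

-- Abandoned two-phase-commit transactions do the same, invisibly.
SELECT gid, prepared FROM pg_prepared_xacts;
```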
Schema Diff
Schema drift between environments is one of those problems that everyone has and nobody talks about until a deployment fails. myDBA’s schema diff lets you compare schemas across any two database connections — staging vs production, primary vs replica, this month’s snapshot vs last month’s.
It auto-generates the exact ALTER statements needed to reconcile the differences. Missing column? Here is the ALTER TABLE ADD COLUMN. Different default value? Here is the ALTER COLUMN SET DEFAULT. Index exists in one environment but not the other? Here is the CREATE INDEX CONCURRENTLY statement.
It is one of those features that you do not think you need until you use it once and wonder how you ever lived without it.
Alerting
myDBA includes configurable threshold-based alerting with severity levels. You set conditions — replication lag exceeds 30 seconds, connection count exceeds 80% of max_connections, a health check degrades below a threshold — and myDBA notifies you via email or Slack.
Cooldown periods prevent alert storms. If your replication lag is flapping between 29 and 31 seconds, you do not want 200 notifications in an hour. Alert history gives you a timeline of every alert that fired, and digest rules with false-positive exclusion let you tune the signal-to-noise ratio over time.
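As a point of comparison, the replication-lag condition from the example above is the kind of thing you would otherwise poll by hand on the primary (the lag columns require PostgreSQL 10+):

```sql
-- Standbys currently replaying more than 30 seconds behind.
SELECT application_name, replay_lag
FROM pg_stat_replication
WHERE replay_lag > interval '30 seconds';
```

The alert adds what the raw query cannot: severity, cooldown, and a history of when the condition held.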
Alerting is included in the free tier. Because knowing that something is wrong should not be a premium feature.
The 12 Free Developer Tools
Alongside the monitoring platform, myDBA includes 12 standalone developer tools that run entirely in your browser. No signup required, no data leaves your machine. These are genuinely useful PostgreSQL utilities, not marketing gimmicks.
1. EXPLAIN Plan Visualizer — Paste any EXPLAIN output and get an interactive tree view, flame graph, bottleneck analysis, and specific SQL recommendations for improving the plan. This is the tool I wanted every time I was staring at EXPLAIN output in psql trying to figure out which node was the problem.
2. PostgreSQL Config Tuner — Enter your server specs (RAM, CPU cores, storage type, workload profile) and get an optimized postgresql.conf with every setting explained. Based on established tuning heuristics and PostgreSQL documentation.
3. SQL Formatter — Paste messy SQL and get it formatted consistently with customizable indentation, keyword casing, and line breaking. Useful for cleaning up queries pulled from logs or pg_stat_statements.
4. Connection String Builder — Parse and convert PostgreSQL connection strings across 6 formats: URI, keyword/value, JDBC, psql command, pgAdmin, and environment variables. Because every tool seems to want a different format.
5. Index Advisor — Paste a query and get heuristic index recommendations based on WHERE clauses, JOIN conditions, and ORDER BY columns. It is not a full workload analyser like the platform’s index advisor, but it is useful for quick checks.
6. pg_hba.conf Generator — A visual form to build PostgreSQL authentication rules. Generates correct pg_hba.conf entries instead of making you remember the syntax and hope you got the CIDR notation right.
7. Table Bloat Estimator — Estimate table and index bloat with VACUUM recommendations. Paste your table statistics and get an estimate of how much space is wasted.
8. Lock Chain Visualizer — Paste pg_locks query output and see the blocking dependency chain visualized as a graph. Find the root blocker without mentally tracing through rows of lock data.
9. Migration Linter — Paste a SQL migration and get warnings about unsafe operations: missing CONCURRENTLY on index creation, ALTER TABLE with ACCESS EXCLUSIVE lock implications, adding columns with volatile defaults, and other patterns that cause downtime.
10. RLS Policy Builder — Generate Row Level Security policies from templates. Select your access pattern (user sees own rows, org-scoped access, role-based access) and get correct CREATE POLICY statements.
11. Data Type Picker — Browse PostgreSQL data types with storage sizes, value ranges, and usage recommendations. Useful when you are designing a schema and want to pick the right type without checking the docs.
12. Cron Expression Builder — A visual pg_cron expression builder with a preview of the next scheduled runs. Because cron syntax is a write-only language.
The philosophy behind these tools is straightforward: give value to the PostgreSQL community even if they never sign up for the monitoring product. If a developer uses the EXPLAIN visualizer and it helps them optimize a query, that is a win regardless of whether they ever create a myDBA account.
Pricing and the Free Tier Philosophy
The free tier includes every feature in myDBA. Every health check, the full index advisor, all extension monitoring, EXPLAIN plan collection, alerting, schema diff — everything. The limits are infrastructure-scoped: 1 primary + 1 replica, 1 user seat, and 7-day metric retention.
Pro pricing is straightforward: 50 GBP per primary per month and 25 GBP per replica per month. Unlimited connections, unlimited users, 30-day data retention.
My philosophy on free tiers is that they should be genuinely useful, not a crippled demo that exists solely to frustrate you into upgrading. If a solo developer or a small team with one PostgreSQL server can run their database better with myDBA’s free plan, that is a good outcome. They will tell other developers about it. Some of those developers work at companies with 20 PostgreSQL servers, and those companies will need Pro.
The worst thing a free tier can do is waste someone’s time. If you sign up, set up the collector, wait for metrics to flow in, and then discover that the features you actually need are behind a paywall — that is disrespectful of your time. I would rather give you the full product and have you outgrow the infrastructure limits naturally.
What’s Next
I have a long list of features I want to build, but I am intentional about not over-promising a roadmap. The direction is deeper intelligence — more automated analysis, more actionable recommendations, better coverage of the PostgreSQL ecosystem as it evolves.
What I am most interested in right now is feedback from the community. Which features are most valuable to you? What pain points does myDBA not address yet? What would make you switch from your current monitoring setup?
If you want to try it, head to myDBA.dev. The free tier takes about five minutes to set up — download the collector binary, point it at your PostgreSQL instance, and metrics start flowing immediately.
PostgreSQL is the most capable open-source database in the world. It has earned monitoring tools that understand it deeply — not generic APM dashboards with a PostgreSQL plugin bolted on as an afterthought. That is what I am building with myDBA.dev, and I think the PostgreSQL community deserves nothing less.