The fastest way to misread a timestamp is to assume the wrong unit. Seconds and milliseconds often look similar at a glance, but they produce completely different dates when converted.
Why the distinction matters
A timestamp in seconds can look valid until it is interpreted as milliseconds, which drops the converted date to within a few weeks of January 1970. The opposite mistake, reading milliseconds as seconds, jumps tens of thousands of years into the future. Either error wastes time when you are tracing expiry times, event ordering, or delayed jobs.
You do not need a complex rule to catch this. A quick length check and sanity check are usually enough.
Quick ways to tell
Timestamps in seconds have been 10 digits long since September 2001 and will remain so until the year 2286. Millisecond values for the same period are 13 digits long.
If the converted date lands around 1970 or far beyond the current year, assume the unit is wrong and test the other one.
- 10 digits usually means seconds.
- 13 digits usually means milliseconds.
- Always sanity-check the resulting year.
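The length check and the year sanity check above can be sketched as two small functions. This is a minimal illustration, not a library API; the names guess_unit and sanity_check are made up for this example, and the 2000-2100 plausibility window is an assumption you should adjust to your data.

```python
from datetime import datetime, timezone

def guess_unit(ts: int) -> str:
    """Guess a Unix timestamp's unit from its digit count.

    Heuristic only: 10 digits -> seconds, 13 digits -> milliseconds,
    anything else is flagged for manual inspection.
    """
    digits = len(str(abs(ts)))
    if digits == 10:
        return "seconds"
    if digits == 13:
        return "milliseconds"
    return "ambiguous"

def sanity_check(ts: int, unit: str) -> bool:
    """Confirm the converted date lands in a plausible range (assumed 2000-2100)."""
    seconds = ts / 1000 if unit == "milliseconds" else ts
    year = datetime.fromtimestamp(seconds, tz=timezone.utc).year
    return 2000 <= year <= 2100

print(guess_unit(1700000000))     # -> seconds (10 digits)
print(guess_unit(1700000000000))  # -> milliseconds (13 digits)
```

In practice you would run guess_unit first, then sanity_check the result; if the check fails, try the other unit before concluding the data is bad.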
Why this matters in real workflows
Mixed units are common when frontend code, backend code, analytics systems, and token libraries all touch the same data. JavaScript's Date.now() emits milliseconds, for example, while JWT exp claims and Unix time() use seconds, so one service may emit one unit while another expects the other.
A converter that shows both interpretations quickly can save a surprising amount of debugging time.
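A converter like that can be sketched in a few lines: print both interpretations side by side and let the implausible one reveal itself. This is a hedged sketch; show_both is a hypothetical helper name, and very large inputs could overflow datetime on some platforms, which the example does not guard against.

```python
from datetime import datetime, timezone

def show_both(ts: int) -> None:
    """Print a raw timestamp interpreted both as seconds and as milliseconds,
    so the wrong reading is obvious at a glance."""
    as_seconds = datetime.fromtimestamp(ts, tz=timezone.utc)
    as_millis = datetime.fromtimestamp(ts / 1000, tz=timezone.utc)
    print(f"as seconds:      {as_seconds.isoformat()}")
    print(f"as milliseconds: {as_millis.isoformat()}")

show_both(1700000000)
# The seconds reading lands in 2023; the milliseconds reading lands in
# January 1970, so the value is almost certainly seconds.
```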