
Unix Timestamps Explained: Epoch Time, Seconds vs Milliseconds

Learn what Unix timestamps are, how epoch time works, the difference between seconds and milliseconds timestamps, and how to convert them in any programming language.

What is a Unix Timestamp?

A Unix timestamp (also called epoch time or POSIX time) is the number of seconds that have elapsed since January 1, 1970, at 00:00:00 UTC. This reference point is called the Unix epoch.

Unix timestamps are timezone-independent — they represent an absolute point in time, not a local time. This makes them ideal for storing and comparing times across different time zones.
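To see that timezone-independence in practice, here is a short Python sketch (the value 1704067200 is used because it falls exactly on 2024-01-01 00:00:00 UTC): the same timestamp renders differently in each zone, yet the two datetimes compare equal because they denote the same instant.

```python
from datetime import datetime, timezone, timedelta

ts = 1704067200  # 2024-01-01 00:00:00 UTC

# Render the same instant in two different zones
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
tokyo = datetime.fromtimestamp(ts, tz=timezone(timedelta(hours=9)))

print(utc.isoformat())    # 2024-01-01T00:00:00+00:00
print(tokyo.isoformat())  # 2024-01-01T09:00:00+09:00

# Different wall-clock readings, same point in time
assert utc == tokyo
```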

Seconds vs Milliseconds

|               | Seconds               | Milliseconds                            |
|---------------|-----------------------|-----------------------------------------|
| Example value | 1704067200            | 1704067200000                           |
| Digits        | 10                    | 13                                      |
| Precision     | 1 second              | 1/1000 second                           |
| Used in       | Unix/Linux, most APIs | JavaScript, Java, high-precision timing |
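A common practical question is telling the two apart when a timestamp arrives without documentation. A minimal heuristic in Python, based on the digit counts above (the threshold is an assumption for dates after 2001, not a standard library API):

```python
def normalize_to_seconds(ts: int) -> int:
    """Assume 13-digit values are milliseconds; 10-digit values are seconds.

    Heuristic only: it holds for timestamps between 2001 and ~2286.
    """
    if ts >= 1_000_000_000_000:  # 13 or more digits -> milliseconds
        return ts // 1000
    return ts

print(normalize_to_seconds(1704067200000))  # 1704067200
print(normalize_to_seconds(1704067200))     # 1704067200
```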

Converting Timestamps

// JavaScript
const now = Date.now()              // milliseconds
const seconds = Math.floor(Date.now() / 1000)  // seconds
const date = new Date(1704067200000)  // from ms timestamp

# Python
import time, datetime
now = int(time.time())              # seconds
dt = datetime.datetime.fromtimestamp(1704067200)  # naive, local time by default

// PHP
$now = time();                       // seconds
$date = date('Y-m-d', 1704067200);
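Conversion also works in the other direction. A Python sketch showing that timestamp-to-datetime-to-timestamp round-trips exactly, and that multiplying by 1000 bridges to millisecond-based systems such as JavaScript:

```python
from datetime import datetime, timezone

# Timestamp -> aware datetime -> timestamp round-trips exactly
dt = datetime.fromtimestamp(1704067200, tz=timezone.utc)
ts = int(dt.timestamp())         # back to seconds
ms = int(dt.timestamp() * 1000)  # seconds -> milliseconds

print(ts)  # 1704067200
print(ms)  # 1704067200000
```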

The Year 2038 Problem

32-bit signed integers can store values up to 2,147,483,647. Unix timestamps will overflow this limit on January 19, 2038 at 03:14:07 UTC. Modern 64-bit systems are not affected — a 64-bit timestamp won't overflow for approximately 292 billion years.

✓ Always use 64-bit integers for timestamps in new applications. Most modern languages and databases default to 64-bit, but legacy embedded systems and older databases may still use 32-bit.
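The overflow boundary can be demonstrated with Python's standard `struct` module (a sketch only; real 2038 failures happen at the C `time_t` level, not in Python, whose integers are arbitrary-precision):

```python
import struct
from datetime import datetime, timezone

# A 32-bit signed time_t tops out at 2**31 - 1 seconds past the epoch.
max_32bit = 2**31 - 1  # 2,147,483,647

# The last representable second still fits in 32 bits...
struct.pack('<i', max_32bit)

# ...but one second later no longer does.
try:
    struct.pack('<i', max_32bit + 1)
    overflowed = False
except struct.error:
    overflowed = True

print(overflowed)  # True
print(datetime.fromtimestamp(max_32bit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```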
