100+ Linux Commands Every Developer Should Know
Whether you are SSH'd into a production server at 2 AM or writing a deployment script, knowing the right Linux command can save you hours. This is a practical, organized reference covering file operations, text processing, networking, permissions, processes, and more, all with real-world examples.
File and Directory Operations
These are the commands you will run dozens of times per day. Mastering them means never having to slow down to think about basic navigation.
# List files
ls # basic listing
ls -la # long format + hidden files
ls -lh # human-readable file sizes (K, M, G)
ls -lt # sort by modification time (newest first)
ls -lS # sort by size (largest first)
ls --color=auto # colorized output
# Navigate
cd /var/log # absolute path
cd ../.. # two directories up
cd - # go back to previous directory
cd ~ # go to home directory
pwd # print working directory
# Create
mkdir -p /opt/app/config # create nested directories
touch app.log # create empty file / update timestamp
touch file{1,2,3}.txt # create file1.txt, file2.txt, file3.txt (brace expansion)
# Copy
cp file.txt backup.txt # copy file
cp -r /src /dest # copy directory recursively
cp -p file.txt backup.txt # preserve permissions and timestamps
cp -u *.txt /backup/ # copy only if source is newer
# Move and Rename
mv old.txt new.txt # rename file
mv /tmp/app/ /opt/ # move directory
mv *.log /var/archive/ # move multiple files with glob
# Delete
rm file.txt # delete file
rm -f file.txt # force delete (no confirmation)
rm -rf /tmp/old-deploy/ # delete directory recursively (dangerous — double-check path)
rmdir empty-dir/ # delete empty directory only
# View files
cat file.txt # print file contents
less file.txt # paginated viewer (q to quit, / to search)
head -20 file.txt # first 20 lines
tail -50 file.txt # last 50 lines
tail -f /var/log/nginx/access.log # follow log in real time
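The basics above compose naturally. A minimal sketch, run entirely inside a throwaway `lincmd_demo` directory (the name is invented) so nothing real is touched:

```shell
# Work in a scratch directory so no real files are at risk
mkdir -p lincmd_demo/config
cd lincmd_demo

# Brace expansion creates several files in one command
touch file{1,2,3}.txt

# Copy with -p to preserve mode and timestamps, then rename with mv
cp -p file1.txt config/file1.backup.txt
mv file3.txt file3.renamed.txt

# Long listing with human-readable sizes confirms the result
ls -lh
cd ..
```

Everything lands under `lincmd_demo/`, so a single `rm -rf lincmd_demo` cleans up afterwards.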
Finding Files
# find — the most powerful file search tool
find /var/www -name "*.php" # find all PHP files
find /tmp -name "*.log" -mtime +7 # log files older than 7 days
find / -user www-data -type f # files owned by www-data
find . -size +100M # files larger than 100MB
find /etc -name "*.conf" -exec cat {} \; # find and exec a command on each result
find . -name "*.bak" -delete # find and delete all .bak files
# locate — fast but uses a database (update with: sudo updatedb)
locate nginx.conf
# which / whereis — find binary location
which python3 # /usr/bin/python3
whereis nginx # nginx: /usr/sbin/nginx /etc/nginx
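To see `-mtime` in action without waiting a week, you can backdate a file's modification time with GNU `touch -d` (a sketch, assuming GNU coreutils; the `find_demo` directory is invented):

```shell
mkdir -p find_demo
# Backdate one log file by 10 days; leave the other fresh
touch -d "10 days ago" find_demo/old.log
touch find_demo/new.log

# -mtime +7 matches files modified more than 7 days ago,
# so this should print only find_demo/old.log
find find_demo -name "*.log" -mtime +7
```

This is the same pattern used in real log-rotation cleanups, just pointed at a sandbox instead of /tmp.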
Text Processing: grep, sed, awk
These three tools are the backbone of log analysis, data transformation, and scripting. Investing time to learn them properly pays off every week.
grep — search text
grep "error" app.log # basic search
grep -i "error" app.log # case-insensitive
grep -r "TODO" ./src/ # recursive search through directory
grep -n "Exception" app.log # show line numbers
grep -c "404" access.log # count matching lines
grep -v "DEBUG" app.log # invert match (exclude DEBUG lines)
grep -rl "password" /etc/ # list only the names of files containing a match (recursive)
grep -A 3 "FATAL" app.log # show 3 lines AFTER each match
grep -B 3 "FATAL" app.log # show 3 lines BEFORE
grep -E "error|warn|fatal" app.log # extended regex (OR pattern)
grep -P "^\d{4}-\d{2}" app.log # Perl-compatible regex
zgrep "error" app.log.gz # grep inside gzipped files
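A quick way to internalize these flags is a small synthetic log (the file name `grep_demo.log` and its contents are invented for illustration):

```shell
# Build a tiny log to practice on
cat > grep_demo.log <<'EOF'
2026-03-26 10:00:01 DEBUG starting worker
2026-03-26 10:00:02 ERROR connection refused
2026-03-26 10:00:03 INFO retrying
2026-03-26 10:00:04 error timeout while retrying
EOF

grep -c -i "error" grep_demo.log     # case-insensitive count: 2
grep -n "ERROR" grep_demo.log        # exact-case match with line number
grep -v "DEBUG" grep_demo.log        # everything except DEBUG lines
grep -E "ERROR|INFO" grep_demo.log   # extended regex OR
```

Note how `-i` catches both `ERROR` and `error`, while the plain pattern matches only the exact case.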
sed — stream editor
sed 's/foo/bar/' file.txt # replace first occurrence per line
sed 's/foo/bar/g' file.txt # replace ALL occurrences
sed 's/foo/bar/gi' file.txt # case-insensitive replace all
sed -i 's/http:/https:/g' config.txt # in-place edit (modifies file)
sed -i.bak 's/old/new/g' config.txt # in-place with .bak backup
sed -n '10,20p' file.txt # print only lines 10-20
sed '/^#/d' config.txt # delete comment lines (lines starting with #)
sed '/^$/d' file.txt # delete blank lines
sed 's/[[:space:]]*$//' file.txt # strip trailing whitespace
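The `-i.bak` form is the safest way to practice in-place edits. A self-contained sketch (the config file and its contents are invented):

```shell
# A small config file to edit
cat > sed_demo.conf <<'EOF'
# endpoint settings
url=http://example.com
# retries

retries=3
EOF

# In-place edit with a .bak safety copy of the original
sed -i.bak 's/http:/https:/g' sed_demo.conf

# Chain expressions with ; to strip comments and blank lines in one pass
sed '/^#/d; /^$/d' sed_demo.conf
```

If the replacement goes wrong, `mv sed_demo.conf.bak sed_demo.conf` restores the original.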
awk — field processor and pattern matching
awk '{print $1}' file.txt # print first column (whitespace-delimited)
awk '{print $1, $3}' file.txt # print columns 1 and 3
awk -F: '{print $1}' /etc/passwd # use : as delimiter, print first field (usernames)
awk '{sum += $2} END {print sum}' data.txt # sum column 2
awk '$3 > 1000' data.txt # print rows where column 3 > 1000
awk '/ERROR/ {print $0}' app.log # print lines matching ERROR
awk 'NR==5' file.txt # print line number 5
awk 'NR>=10 && NR<=20' file.txt # print lines 10-20
# Nginx log: print IPs that hit 404
awk '$9 == 404 {print $1}' access.log | sort | uniq -c | sort -rn | head
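The same sum-and-filter patterns work on any whitespace-delimited data. A self-contained sketch with an invented three-line request log (real nginx logs put the status in field 9; here it is field 3, so always check your own format first):

```shell
# Fake request log: IP, path, status, bytes
cat > awk_demo.log <<'EOF'
10.0.0.1 /index.html 200 5120
10.0.0.2 /missing 404 153
10.0.0.1 /missing 404 153
EOF

# Sum the bytes column
awk '{sum += $4} END {print sum}' awk_demo.log

# Count 404s per IP, busiest first
awk '$3 == 404 {print $1}' awk_demo.log | sort | uniq -c | sort -rn
```

The sum here is 5120 + 153 + 153 = 5426, which the first command prints.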
Other text tools
cut -d: -f1 /etc/passwd # cut field 1 with : delimiter
cut -c1-10 file.txt # cut first 10 characters per line
sort file.txt # sort alphabetically
sort -n numbers.txt # sort numerically
sort -rn numbers.txt # reverse numeric sort
sort -k2 data.txt # sort by column 2
sort -u file.txt # sort and remove duplicates
uniq file.txt # remove consecutive duplicates
uniq -c file.txt # prefix each line with count
uniq -d file.txt # show only duplicate lines
wc -l file.txt # count lines
wc -w file.txt # count words
wc -c file.txt # count bytes
tr 'a-z' 'A-Z' < file.txt # convert lowercase to uppercase
tr -d '\r' < windows.txt > unix.txt # strip Windows carriage returns
tr -s ' ' < file.txt # squeeze multiple spaces into one
diff file1.txt file2.txt # show differences between two files
diff -u file1.txt file2.txt # unified diff format (used in patches)
# Combine tools in pipelines
cat access.log | grep "POST" | awk '{print $7}' | sort | uniq -c | sort -rn | head -20
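The `cut | sort | uniq -c | sort -rn` chain above is worth committing to muscle memory; it answers "what are the most common values?" for any delimited file. A sketch with invented colon-delimited data:

```shell
# Invented data in /etc/passwd style: user:shell
cat > pipe_demo.txt <<'EOF'
alice:/bin/bash
bob:/bin/zsh
carol:/bin/bash
dave:/bin/bash
EOF

# Most common shells, descending
cut -d: -f2 pipe_demo.txt | sort | uniq -c | sort -rn
```

Remember that `uniq` only collapses adjacent duplicates, which is why the `sort` before it is required.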
File Permissions
Understanding Linux permissions is essential for both security and debugging "permission denied" errors. Every file has three permission sets: owner, group, and others. Each set has read (r=4), write (w=2), and execute (x=1).
Calculate Permissions Instantly
Not sure what numeric value to pass to chmod? Use our Chmod Calculator to convert between symbolic (rwxr-xr-x) and numeric (755) notation — and understand exactly what each permission grants.
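The octal digits are just sums of those values: 7 = 4+2+1 (rwx), 5 = 4+1 (r-x), 4 = 4 (r--). You can verify any mode from the shell with GNU `stat` (a sketch; `stat -c` is the GNU/Linux form, BSD and macOS use different flags):

```shell
touch perm_demo.sh

# 754 = rwx (7) for owner, r-x (5) for group, r-- (4) for others
chmod 754 perm_demo.sh

# Print the mode back in both octal and symbolic form
stat -c '%a %A' perm_demo.sh   # 754 -rwxr-xr--
```

Reading the symbolic output left to right as three triplets is the fastest way to sanity-check a numeric mode.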
# chmod — change permissions
chmod 755 script.sh # rwxr-xr-x (owner: all, group/others: read+exec)
chmod 644 config.php # rw-r--r-- (owner: read+write, others: read only)
chmod 600 ~/.ssh/id_rsa # rw------- (private key — owner only)
chmod 700 ~/.ssh/ # rwx------ (directory — owner only)
chmod +x deploy.sh # add execute for all
chmod -w important.conf # remove write for all
chmod u+x,g-w,o-r file.sh # symbolic: add exec for user, remove write for group, remove read for others
chmod -R 755 /var/www/html/ # recursive
# chown — change ownership
chown www-data:www-data /var/www/html # owner and group
chown -R ubuntu:ubuntu /opt/app/ # recursive
chown root /etc/cron.d/myjob # change owner only
# chgrp — change group
chgrp developers /opt/app/config.py
# Special permissions
chmod u+s /usr/bin/mytool # setuid: runs as file owner (bare +s would set setgid too)
chmod g+s /shared/dir/ # setgid: new files inherit group
chmod +t /tmp/ # sticky bit: only owner can delete files
# View permissions
ls -la # see permissions in listing
stat file.txt # detailed file metadata including permissions
namei -l /var/www/html/index.php # show permissions for every dir in path (debug)
System Information
# CPU and memory
top # interactive process viewer
htop # better interactive viewer (install: apt install htop)
uptime # load averages for 1, 5, 15 minutes
nproc # number of CPU cores
cat /proc/cpuinfo # detailed CPU information
free -h # memory usage (human-readable)
vmstat 1 # virtual memory stats, updated every 1s
# Disk
df -h # disk usage per filesystem
df -ih # inode usage (important when disk "full" but space free)
du -sh /var/log/ # size of a directory
du -sh /* 2>/dev/null | sort -rh | head -20 # find biggest top-level directories
lsblk # list block devices (disks, partitions)
fdisk -l # list disk partitions (needs root)
mount # show all mounted filesystems
mount /dev/sdb1 /mnt/data # mount a disk
umount /mnt/data # unmount
# OS and kernel
uname -a # kernel version and architecture
cat /etc/os-release # distro name and version
hostname # hostname
hostname -I # all IP addresses of this machine
date # current date and time
timedatectl # timezone and NTP status
timedatectl set-timezone UTC # set timezone
Process Management
# View processes
ps aux # all processes, detailed
ps aux | grep nginx # find nginx processes
ps -ef --forest # process tree
pgrep nginx # get PID(s) of nginx
pstree # visual process tree
# Kill processes
kill 1234 # send SIGTERM to PID 1234 (graceful)
kill -9 1234 # send SIGKILL (force kill, cannot be caught)
kill -HUP 1234 # SIGHUP: reload config (nginx, sshd, etc.)
kill -USR2 1234 # SIGUSR2: graceful reload (PHP-FPM)
pkill nginx # kill by process name
killall python3 # kill all processes named python3
# Background and foreground
command & # run command in background
nohup command & # run in background, immune to hangup
nohup ./script.sh > /tmp/out.log 2>&1 & # background with log
jobs # list background jobs in current shell
fg %1 # bring job 1 to foreground
bg %1 # resume job 1 in background
disown %1 # detach job from shell (keeps running after logout)
# Priority
nice -n 10 command # run with lower priority (nice value 10)
renice -n 5 -p 1234 # change running process priority
# Monitoring
watch -n 2 'ps aux --sort=-%cpu | head' # watch top CPU processes every 2s
strace -p 1234 # trace system calls of running process
lsof -p 1234 # list files opened by PID
lsof -i :8080 # which process is using port 8080
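The SIGTERM-first kill flow can be rehearsed safely with a throwaway background `sleep` standing in for a real daemon:

```shell
# Launch a disposable background process and capture its PID
sleep 60 &
pid=$!

# kill -0 sends no signal at all; it only tests that the PID exists
kill -0 "$pid" && echo "running: $pid"

# Graceful SIGTERM first; -9 is the last resort, not the default
kill "$pid"
wait "$pid" 2>/dev/null

# The PID is gone now, so kill -0 fails
kill -0 "$pid" 2>/dev/null || echo "stopped" > proc_demo.txt
```

The same `kill -0` trick is handy in scripts that need to check whether a PID from a pidfile is still alive.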
Networking Commands
These are the commands you use to debug connectivity, inspect DNS, test endpoints, and diagnose slowdowns between services.
# curl — HTTP Swiss Army knife
curl https://api.example.com/health
curl -s -o /dev/null -w "%{http_code}" https://example.com # just print status code
curl -H "Authorization: Bearer TOKEN" https://api.example.com/data
curl -X POST -H "Content-Type: application/json" -d '{"key":"value"}' https://api.example.com
curl -k https://self-signed-cert.local # skip TLS verification
curl -v https://example.com # verbose — shows headers
curl -L https://example.com # follow redirects
curl --max-time 5 https://slow.server # timeout after 5 seconds
curl -o file.tar.gz https://example.com/release.tar.gz # download to file
# wget — file downloader
wget https://example.com/file.tar.gz
wget -O output.html https://example.com
wget -q --spider https://example.com # check if URL is reachable (no download)
# DNS lookups
dig example.com # full DNS lookup
dig example.com A # A record only
dig example.com MX # MX records
dig @8.8.8.8 example.com # query specific DNS server
dig +short example.com # just the IP
nslookup example.com # alternative DNS tool
host example.com # simple lookup
# Network interfaces and routing
ip addr # show IP addresses (modern)
ip addr show eth0 # specific interface
ip route # show routing table
ip route add 10.0.0.0/8 via 192.168.1.1 # add static route
ifconfig # show interfaces (older systems)
# Open ports and connections
ss -tlnp # TCP listening ports with process names
ss -tulnp # TCP + UDP listening ports
netstat -tlnp # same, on older systems
lsof -i :443 # what's on port 443
# Connectivity testing
ping -c 4 8.8.8.8 # ping 4 times
ping -c 4 -i 0.2 host # fast ping (0.2s interval)
traceroute google.com # trace route to destination
mtr google.com # combined ping + traceroute (live)
nc -zv host 443 # test if port is open (netcat)
nc -zv host 3306 # check MySQL port reachability
telnet host 25 # test SMTP port
SSH
# Connect
ssh user@192.168.1.10
ssh -i ~/.ssh/id_rsa user@host # specify key
ssh -p 2222 user@host # non-standard port
ssh -v user@host # verbose (debug connection issues)
# Generate SSH keys
ssh-keygen -t ed25519 -C "work@example.com" # modern, fast, secure
ssh-keygen -t rsa -b 4096 # RSA 4096 (legacy compatibility)
# Copy public key to server
ssh-copy-id user@host
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@host
# SSH config (~/.ssh/config) — save time, stop retyping options
# Host prod
# HostName 52.201.126.70
# User ubuntu
# IdentityFile ~/.ssh/prod.pem
# ServerAliveInterval 60
# Tunnels
ssh -L 3306:localhost:3306 user@remote-host # local port forward (access remote MySQL locally)
ssh -R 8080:localhost:3000 user@remote-host # remote port forward
ssh -D 1080 user@remote-host # SOCKS5 proxy
# File transfer
scp file.txt user@host:/remote/path/
scp -r /local/dir user@host:/remote/dir
rsync -avz ./src/ user@host:/app/src/ # sync directory, compressed
Compression and Archives
# tar — the standard
tar -czf archive.tar.gz /path/to/dir # create gzipped tarball
tar -cjf archive.tar.bz2 /path/to/dir # create bzip2 tarball (smaller, slower)
tar -xzf archive.tar.gz # extract gzipped tarball
tar -xzf archive.tar.gz -C /opt/ # extract to specific directory
tar -tzf archive.tar.gz # list contents without extracting
tar -czf - ./dir | ssh user@host "cat > /backup/dir.tar.gz" # pipe archive over SSH
# gzip / gunzip
gzip file.txt # compress to file.txt.gz (removes original)
gzip -k file.txt # keep original
gunzip file.txt.gz # decompress
zcat file.txt.gz # view without decompressing
# zip
zip -r archive.zip /path/to/dir
zip -e secrets.zip credentials.txt # encrypt with password
unzip archive.zip
unzip archive.zip -d /target/dir
unzip -l archive.zip # list contents
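Round-tripping a small directory through tar is a good way to make the flag combinations (c/x/t, z, f, -C) stick. A sketch using an invented `tar_demo` directory:

```shell
# Build a small directory tree to archive
mkdir -p tar_demo/src
echo "hello" > tar_demo/src/a.txt
echo "world" > tar_demo/src/b.txt

# Create (-C changes directory first, so paths inside stay relative)
tar -czf tar_demo.tar.gz -C tar_demo src

# List contents without extracting, then extract to a separate location
tar -tzf tar_demo.tar.gz
mkdir -p tar_demo/out
tar -xzf tar_demo.tar.gz -C tar_demo/out
cat tar_demo/out/src/a.txt
```

Using `-C` on creation keeps absolute paths out of the archive, which makes extraction predictable on any machine.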
Package Management
# apt (Debian / Ubuntu)
sudo apt update # refresh package index
sudo apt upgrade # upgrade all installed packages
sudo apt install nginx # install a package
sudo apt remove nginx # remove package (keep config)
sudo apt purge nginx # remove package and config
sudo apt autoremove # remove unused dependencies
apt list --installed # list installed packages
apt search "web server" # search for packages
apt show nginx # detailed package info
# yum / dnf (RHEL / CentOS / Amazon Linux)
sudo yum update
sudo yum install nginx
sudo dnf install nginx # dnf is the modern yum replacement
# Homebrew (macOS)
brew install terraform
brew update && brew upgrade
brew list # list installed
brew info nginx # package info
Disk and Filesystem
# Check for large files consuming space
du -sh /var/log/* | sort -rh | head -20
find / -size +500M -type f 2>/dev/null | sort
# Inodes (can be exhausted even with free disk space)
df -i # inode usage per filesystem
find / -xdev -printf '%h\n' 2>/dev/null | sort | uniq -c | sort -rn | head # dirs with most files
# Disk health
sudo dmesg | grep -i "error\|fail" # kernel disk errors
sudo smartctl -a /dev/sda # SMART disk health (install smartmontools)
# Filesystem
sudo fsck /dev/sdb1 # check filesystem (unmount first)
sudo mkfs.ext4 /dev/sdb1 # format partition as ext4
sudo resize2fs /dev/sdb1 # resize ext4 after partition resize
systemd: Managing Services
sudo systemctl start nginx # start service
sudo systemctl stop nginx # stop service
sudo systemctl restart nginx # restart
sudo systemctl reload nginx # reload config (no downtime)
sudo systemctl status nginx # show status and recent logs
sudo systemctl enable nginx # enable at boot
sudo systemctl disable nginx # disable at boot
sudo systemctl is-active nginx # check if running (exit 0 = yes)
systemctl list-units --type=service # list all services
systemctl list-units --failed # show failed services
# View service logs with journald
journalctl -u nginx # all logs for nginx
journalctl -u nginx -f # follow live
journalctl -u nginx --since "1 hour ago"
journalctl -u nginx --since "2026-03-26 10:00:00"
journalctl -p err -b # errors from current boot
journalctl --disk-usage # how much disk logs use
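For your own daemons, all of the `systemctl` verbs above operate on a unit file. A minimal sketch of what one might look like (the `myapp` name and paths are hypothetical; such a file would typically live at /etc/systemd/system/myapp.service, followed by `sudo systemctl daemon-reload` before the first start):

```ini
[Unit]
Description=My example app (hypothetical)
After=network.target

[Service]
ExecStart=/opt/myapp/bin/server
Restart=on-failure
User=www-data

[Install]
WantedBy=multi-user.target
```

`Restart=on-failure` is what lets systemd bring the process back automatically, and the `[Install]` section is what `systemctl enable` acts on.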
Git Basics on the Command Line
git status # show working tree status
git log --oneline -20 # compact commit history
git log --oneline --graph --all # visual branch graph
git diff # unstaged changes
git diff --staged # staged changes
git add -p # interactive stage (review each hunk)
git commit -m "fix: correct timeout value"
git push origin main
git stash # temporarily shelve changes
git stash pop # restore stashed changes
git stash list # list all stashes
git branch -a # all branches (local + remote)
git checkout -b feature/new-thing # create and switch to new branch
git cherry-pick abc1234 # apply a specific commit to current branch
git reset HEAD~1 # undo last commit, keep changes unstaged (mixed reset, the default)
git reset --soft HEAD~1 # undo commit, keep changes staged
git reset --hard HEAD~1 # undo commit and discard changes (destructive)
git bisect start # start binary search for a bug
git blame file.py | head -20 # who changed each line
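All of these are safe to rehearse in a scratch repository. A sketch (the inline `-c user.*` flags supply a commit identity without touching your global config):

```shell
# Scratch repository so nothing real is at risk
mkdir -p git_demo && cd git_demo
git init -q .

# Two commits, identity passed inline
echo "v1" > app.txt
git add app.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "feat: first version"
echo "v2" > app.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -am "fix: second version"

# Compact history, then undo the last commit but keep its change staged
git log --oneline
git reset --soft HEAD~1
git status --short
cd ..
```

After the `--soft` reset, `git log --oneline` shows a single commit and `git status` shows app.txt staged, ready to be amended or recommitted.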
Useful One-Liners and Productivity Tips
# Repeat last command with sudo
sudo !!
# Run last command replacing a string
^old^new # replaces first occurrence and re-runs
# Quickly backup a file before editing
cp nginx.conf nginx.conf.bak.$(date +%Y%m%d_%H%M%S)
# Watch a command every 2 seconds
watch -n 2 'ss -tlnp'
# Create a timestamped log
./deploy.sh 2>&1 | tee deploy-$(date +%Y%m%d-%H%M%S).log
# Find which process is using a file
fuser /var/log/app.log
fuser -k /var/log/app.log # kill that process
# Quick HTTP server in current directory (Python 3)
python3 -m http.server 8000
# Check open file descriptors system-wide
sysctl fs.file-nr
# Decode a JWT payload (note: JWT segments are base64url-encoded, so you may need to add = padding and translate -_ to +/ first)
echo "PAYLOAD_PART" | base64 -d
# Replace across multiple files
find . -name "*.conf" -exec sed -i 's/old_value/new_value/g' {} \;
# Count lines of code
find . -name "*.py" | xargs wc -l | sort -rn
# Check last 50 failed logins
sudo lastb | head -50
# Monitor network bandwidth per process
sudo nethogs eth0 # install: apt install nethogs
# Quickly empty a log file without deleting it
truncate -s 0 /var/log/app.log
> /var/log/app.log # same effect
Environment Variables
printenv # print all environment variables
printenv PATH # print specific variable
echo $HOME # reference a variable
export MY_VAR="hello" # set in current shell + child processes
export PATH="$PATH:/opt/myapp/bin" # append to PATH
# Make permanent (add to ~/.bashrc or ~/.zshrc)
echo 'export MY_VAR="hello"' >> ~/.bashrc
source ~/.bashrc # reload without opening new shell
env VAR=value command # set variable only for one command
unset MY_VAR # remove variable
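Three scoping behaviors worth internalizing: a plain assignment stays in the current shell, `export` propagates it to child processes, and `env VAR=... cmd` scopes it to a single command. A sketch (variable names are invented):

```shell
# Unexported variables are invisible to child processes
MY_LOCAL="shell-only"
sh -c 'echo "child sees MY_LOCAL=[$MY_LOCAL]"' > env_demo.txt

# Exported variables propagate to children
export MY_EXPORTED="visible"
sh -c 'echo "child sees MY_EXPORTED=[$MY_EXPORTED]"' >> env_demo.txt

# env scopes a variable to one command; the parent shell never sees it
env ONE_SHOT=yes sh -c 'echo "one-shot=[$ONE_SHOT]"' >> env_demo.txt
echo "after=[$ONE_SHOT]" >> env_demo.txt

cat env_demo.txt
```

The single quotes around each child command matter: they prevent the parent shell from expanding the variable before the child runs.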
The Bottom Line
No developer memorizes all 100+ of these commands. What matters is knowing they exist so you can reach for the right tool when you need it. The commands you will use every single day: ls, cd, grep, tail -f, ssh, ps aux, df -h, and systemctl status. The ones that will save you hours when something goes wrong: strace, lsof, ss -tlnp, journalctl, and awk for parsing logs.
Bookmark this page and use it as your daily reference. Related tools: Chmod Calculator (numeric to symbolic permissions), Cron Parser (decode cron schedules), SSH Config Generator, Regex Tester (for crafting grep patterns), and Bash Scripting Cheat Sheet to turn these commands into scripts.