Showing posts with label os. Show all posts
Wednesday, July 13, 2011
Linux: Memory usage with Exmap
Wow. I can read the output from top just fine, but this little utility is amazing.
Saturday, June 25, 2011
Version-control for the home directory dot-files
Lots of people revision-control the dotfiles in their home directories: .bashrc, .vim, etc. That works OK as long as you can ignore any files not under version control, and most VCSs allow that.
But what if you have several homedirs, and you want to maintain some common files between them? Of course, you also have files that differ. I think I've found an elegant solution: a 2-tiered VCS.
Each homedir gets a git repo, which is pulled from one in Dropbox. (You'll see why I use Dropbox instead of GitHub in a minute.) I have a branch for each machine, so I can do some comparisons if I want. The master branch has only the common files, which can be used for seeding a new branch on a new machine.
What if I change a common file? I'd hate to have to merge it on each machine. I could forget easily, and that's a lot of work for every little change.
Instead, I keep the common files in cvs, also in Dropbox. Each local cvs workspace is also added to git. (That's not strictly necessary, but it makes setting up a new machine trivial.) When I change a common file, I just 'cvs commit' that file. On any machine, I can run 'cvs update' at any time.
One of the keys to this is the presence of 'CVS/Entries.Static' in the homedir. Otherwise, 'cvs update' could wreak havoc, as some common files are overridden on specific machines. (That's why a simpler solution does not work.) CVS creates that file for you automatically if you 'cvs co' a single file. Otherwise, you can just 'touch CVS/Entries.Static', and remove unwanted files/directories from 'CVS/Entries'.
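Creating that marker by hand can be sketched like this. It's a minimal sketch in a throwaway directory, with hypothetical file names; it only touches the CVS bookkeeping files, so no cvs binary is needed:

```shell
cd "$(mktemp -d)"        # stand-in for the homedir workspace
mkdir CVS

# a fake CVS/Entries listing one wanted and one unwanted file (names hypothetical)
printf '/.bashrc/1.1/Mon Jan 1 00:00:00 2011//\n' >  CVS/Entries
printf '/unwanted-file/1.1/Mon Jan 1 00:00:00 2011//\n' >> CVS/Entries

# the marker that stops 'cvs update' from pulling in anything not listed
touch CVS/Entries.Static

# prune the unwanted entry from CVS/Entries
grep -v '^/unwanted-file/' CVS/Entries > CVS/Entries.new
mv CVS/Entries.new CVS/Entries
```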
Another helpful thing is to commit a file called 'cvsignore' (no dot) into the CVSROOT directory (which is in the repo on Dropbox). It has just a single '*', which means to 'ignore everything not listed explicitly in CVS/Entries'. For sub-directories (e.g. .vim/), add a file called .cvsignore with just a single character, '!', to let cvs see all files there.
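The two one-character ignore files can be sketched as follows (paths hypothetical; this only writes the files, and committing cvsignore into CVSROOT is a separate 'cvs commit'):

```shell
cd "$(mktemp -d)"        # stand-in for the workspace / CVSROOT checkout
mkdir -p CVSROOT .vim

# in CVSROOT: ignore everything not listed explicitly in CVS/Entries
printf '*\n' > CVSROOT/cvsignore

# in a subdirectory like .vim/: '!' resets the ignore list, so cvs sees all files
printf '!\n' > .vim/.cvsignore
```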
Also put '*' in '~/.gitignore', and add/commit that file. Henceforth, you will need 'git add -f' for any new files, but that's not really a bad thing.
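The ignore-everything setup can be sketched end to end in a throwaway repo (file names hypothetical). Note that .gitignore itself needs 'git add -f', since the '*' ignores it too:

```shell
cd "$(mktemp -d)"
git init -q .
echo '*' > .gitignore
git add -f .gitignore      # forced: the '*' pattern ignores even .gitignore

echo 'set -o vi' > .bashrc
git check-ignore .bashrc   # prints .bashrc: new files are ignored by default
git add -f .bashrc         # so every new file needs -f
git status --porcelain     # both files staged
```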
The most difficult -- and dangerous -- part is setting up the local git repo. Normally, 'cd ~; git clone URL .' will set up a clone in the current directory, but that only works when the directory is empty. Instead, I came up with this sequence of steps:
git init
git remote add origin ~/Dropbox/homedir-repo
git fetch origin
git checkout -f -B mymachine origin/mymachine
Of course, the homedir-repo is 'bare', and the relevant branch was set up safely in a different directory, with lots of testing. We don't want to destroy our homedir by accident!
So far, this is working extremely well for me, and I have not seen any better ideas out there.
This is helpful in ~/.git/config:
[gc]
    auto = 0
That way, git will not pack stuff on Dropbox. Pushing to the remote repo will then only add files. Very little will change. (That's the problem with hosting CVS on Dropbox; files are edited or appended for every commit.)
Thursday, December 9, 2010
Linux: Interesting, obscure commands
# First, the most important place for interesting commands:
http://www.commandlinefu.com/commands/browse/sort-by-votes
# Now, a bunch of cut-and-pasted stuff, from a thread...
# to fix the terminal
reset
# or try Ctrl-v Ctrl-o
Or try:
reset='echo "X[mX(BX)0OX[?5lX7X[rX8" | tr "XO" "\033\017" && /usr/bin/reset'
ESC [m      (actually ESC [0m) Character Attributes: Normal (not bold, for instance)
ESC (B      Select G0 Character Set: United States (USASCII)
ESC )0      Select G1 Character Set: Special Character and Line Drawing Set
O (Ctrl-O)  Switch to Standard Character Set
ESC [?5l    DEC Private Mode Reset: Normal Video
ESC 7       Save Cursor
ESC [r      weird (actually 'ESC [0;0r'? Set Scrolling Region [top;bottom])
ESC 8       Restore Cursor
# to turn off display
xset dpms force off
# for virtual terminal
Personally, I think every Linux user should know how to use the virtual terminals. Just hit Ctrl+Alt+F1 and that should take you to a bash prompt. Usually the main one you're on with X running is F7, so you can switch back to that.
If X locks up on me, just a simple:
Ctrl+Alt+F1
login, and run
$ sudo /etc/init.d/gdm restart
Note that it could be gdm, kdm or xdm depending on your distro.
On RedHat or Ubuntu, you could instead:
$ sudo service gdm restart
Or
$ invoke-rc.d gdm restart # for ubuntu/debian
# Others
GNU screen (or tmux) is an excellent command (you won't have to use nohup again); if you don't have it, you should install it and try it.
If you're on a Red Hat based distro, yum and rpm are good to know. If it's Debian based, apt-get and dpkg for installing stuff.
ping, traceroute (or mtr --curses or nstat), ifconfig are all handy for networking stuff.
Look into htop; it's a much nicer version of top, but you may need to enable additional third-party repositories if using yum or apt-get (or aptitude). Or nmon, or atop. And pgrep?
# More on screen:
screen (start a screen session)
screen -dr (detach said screen session and reattach it in the current session)
screen -ls (show active screen sessions)
screen -dr [screen session] (detach and resume a specific session)
# And for pair programming:
screen -S sessionname (start a session with a name)
screen -x sessionname (attach the named session, even if it's attached elsewhere)
Those are essentially the only two I use, with the occasional "screen -ls". I much prefer -x over -r, as you can attach in multiple places. So at home I always leave stuff running in screen, and when I log in from work or wherever, I can attach that same screen session without first detaching it from my terminal at home. Plus, you could have two people working together in one "screen", which is good for pair programming. http://www.ibm.com/developerworks/aix/library/au-gnu_screen/
# The magic SysRq key
https://secure.wikimedia.org/wikipedia/en/wiki/Magic_SysRq_key
# General stuff
Strings
- grep
- awk
- uniq, sort, sort -n
- seq
- cut
- wc
Files
- rsync
- lsof
- find | xargs
- locate
- df -H
- du -cks | sort -n
- scp
- strings
- file
- touch
- z* (zgrep, zcat, etc)
- tail -f, head
Administration
- man
- ps auxf (f only on GNU)
- kill, -HUP, -9
- sudo
- screen
- /etc/init.d/ scripts
- id
- ^Z, fg, jobs, &
Networking
- nmap
- dig
- tcpdump
- ifconfig
Operators
- The knowledge that bash is a programming language that provides all your basic constructs (ifs, loops, variables, functions), but instead of having a library of functions, you execute simple programs instead
- |
- <, >, >>
- - as stdin, e.g. "cat somefile.txt | vi -"
- for i in a b c d; do echo $i; something_else $i; done
- alias
- All the goodies at http://samrowe.com/wordpress/advancing-in-the-bash-shell/
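The building blocks listed above compose into one-liners; a couple of runnable sketches:

```shell
# pipes + sort + uniq: count occurrences, most frequent last
printf 'b\na\nb\n' | sort | uniq -c | sort -n

# a for loop driving another command per item
for i in a b c; do
    echo "processing $i"
done

# '-' as stdin: feed a pipeline into a tool that expects a file argument
seq 3 | cat -
```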
# And more
netstat -ano views your open TCP and UDP connections
netstat -tulp # what is listening on which port
# or lsof -i
top -b | grep processname # continuous info about a process, you have to Ctrl+C out of it though
nmap -sS -sV -O localhost # local listening ports and what versions of daemons are running.
# maybe -p 1-65535
xsel --clipboard --input # stdin to clipboard
# OSX
pbcopy # stdin to clipboard
diff this that | vim -
pgrep firef
watch sensors #?
ncdu # to find out where all space is being used
htop > ps # not a redirect
# For bigger programs
mocp, alsamixer, ncdu, htop, emacs, screen, feh, acpi, dpkg, convert
diff -wyW160 this that | less #compare side-by-side
diff -u this that >other #write unified diff
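The diff/patch round trip can be sketched in a throwaway directory (file names hypothetical):

```shell
cd "$(mktemp -d)"
printf 'one\ntwo\nthree\n' > this
printf 'one\n2\nthree\n' > that

diff -u this that > other || true   # diff exits 1 when the files differ
patch this < other                  # apply the unified diff to 'this'
cmp this that && echo 'files now identical'
```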
patch
Tuesday, October 19, 2010
Win32: Beware string-length restrictions
Some Win32 functions impose constraints on the lengths of strings. E.g. UNICODE_STRING. Here is a nasty example:
http://blogs.msdn.com/b/larryosterman/archive/2010/10/19/because-if-you-do_2c00_-stuff-doesn_2700_t-work-the-way-you-intended_2e00_.aspx
Monday, October 4, 2010
Python: List of running processes (cross-platform)
On Linux, we have lsof. On Windows, things are much more complicated. Fortunately, there is a cross-platform Python module which can find information about all running processes, psutil.
UNIX/Python: sockets, select, and poll
Doug Hellman has provided a very readable description of how to use sockets, in Python.
Note: Most of this will not work with Windows.
Wednesday, September 29, 2010
Linux Multi-Threaded "Swap Insanity"
Here is a helpful description of a problem that can be encountered on Linux with NUMA (Non-Uniform Memory Access), often seen with MySQL.
In a nutshell, the machine may bog down with memory-swapping when total memory use is far below what is available, because the default policy is for a processor to prefer the memory in its own node even when swapping occurs.
The simplest solution is to use numactl --interleave.
Saturday, September 4, 2010
UNIX: strangeness with signals
Slava Pestov, an author of the language Factor, calls this blog post "2 Things Every Unix Developer Should Know." I have to admit that I do not fully understand how to handle these problems, so I accept with alacrity his advice to use a high-level language instead of diving into UNIX.
Monday, August 30, 2010
Linux: How strace works
Here is a highly instructive article on the innards of strace, a Linux utility for tracing system calls in a process and its children. The sample code turns on ptrace for the child before starting the child, so I am a little curious about how it works when I attach it to a process that is always running.
At a previous job, I saw crashes in vfork() when using strace on make. I'm told that it should work, so I have to assume that it was caused by yet more memory bugs in the poorly written code at that company. Now I wish I'd saved the crashing example so that I could dive into it further.
Wednesday, August 25, 2010
Maximum argument length in Linux
From a 2004 Slashdot interview with Rob Pike:
I didn't use Unix at all, really, from about 1990 until 2002, when I joined Google. (I worked entirely on Plan 9, which I still believe does a pretty good job of solving those fundamental problems.) I was surprised when I came back to Unix how many of even the little things that were annoying in 1990 continue to annoy today. In 1975, when the argument vector had to live in a 512-byte-block, the 6th Edition system would often complain, 'arg list too long'. But today, when machines have gigabytes of memory, I still see that silly message far too often. The argument list is now limited somewhere north of 100K on the Linux machines I use at work, but come on people, dynamic memory allocation is a done deal!
Pike is referring to this problem, most common when a '*' wildcard expands to too many files. For the examples, I would say that 3a/b might as well be a Perl/Python/Ruby script. I would also add Example 2b:
% find -X $directory1 -name '*' -depth 1 -type f | xargs mv --target-directory=$directory2
(if --target-directory is available on mv) since xargs already holds the line-length well below ARG_MAX. Or just use the little-known plus sign in find:
% find $directory1 -name '*' -depth 1 -type f -exec mv -t $directory2 {} +
(With '-exec ... +', the '{}' must come last, so this needs GNU mv's -t/--target-directory flag.) That might be the fastest solution.
Another idea, from Lyren Brown:
for f in *foo*; do echo $f; done | tar -T/dev/stdin -cf - | tar -C/dest/path -xvf -
Apparently, the latest Linux kernel finally removes any practical limit.
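Both the limit and the standard workaround are easy to observe; a sketch (exact numbers vary by system and by xargs implementation):

```shell
# the per-exec limit on argument-list size, in bytes
getconf ARG_MAX

# xargs stays under the limit by splitting its input across multiple
# invocations of the command, so 'echo' may run several times here
# (one output line per invocation):
seq 1 500000 | xargs echo | wc -l
```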
Sunday, August 15, 2010
Errors in moving to 64-bit architecture
Here are some errors that one may encounter when compiling old code on a new, 64-bit architecture. I have seen all but #7, and I have seen other errors similar to that one, involving how headers are included. Many, fortunately, would be caught by examination of compiler warnings, but explicit casts will usually suppress those warnings.
The main advice seems to be: Let the compiler help you.
Wednesday, August 11, 2010
bash: little-known alias trick
According to the bash manual:
If the last character of the alias value is a space or tab character, then the next command word following the alias is also checked for alias expansion.
By default, alias expansion is not performed in a non-interactive shell (unless 'shopt expand_aliases'). Besides, if you are using ssh, the account on the server might not have all the aliases you are used to. The trick is to alias ssh with an extra space.
$ alias ll="ls -al"
$ alias s="ssh me@mymachine.com "
$ s ll
(long listing of all files...)
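The trailing-space rule can be demonstrated without ssh, using stand-in aliases (names hypothetical; 'shopt -s expand_aliases' is needed because this runs as a non-interactive script):

```shell
#!/bin/bash
shopt -s expand_aliases    # aliases are off by default in non-interactive shells

alias target='expanded'    # a stand-in alias (hypothetical)
alias say='echo '          # NOTE the trailing space
alias saynosp='echo'       # same, without the space

saynosp target             # prints: target   (next word not alias-checked)
say target                 # prints: expanded (next word was alias-expanded)
```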
Monday, August 9, 2010
Better than make?
As mentioned earlier, I have switched from bash to rc. To avoid a problem with the MacPorts version, I installed it from Plan 9 from User Space, which also provides mk, a replacement for make.
There are many good reasons to prefer mk, but for me the strongest is to avoid vfork(). Here is the comment thread from reddit:
uriel says:
Better than bash?
I avoided bash (the GNU extension of the Bourne shell) for a long time for one reason:
prog1 |& prog2
I have always been annoyed by the bash equivalent:
prog1 2>&1 | prog2
When I learned that a new version of bash allowed the |&, I made the switch and have been very happy.
Recently, I learned about something that could be better than bash: rc. The docs for rc are very interesting and amply justify the switch.
Instead of switching immediately, I am moving gradually by putting
SHELL=rc
into my makefiles. So far, I am very happy with it.
There is one caveat, pointed out by uriel on reddit:
I suspect the 'rc' in MacPorts is an old re-implementation which has some serious flaws.
The original rc, plus all the great Plan 9 commands are available as part of Plan 9 from User Space which runs great on OS X (Russ Cox, the author, is also one of the main Google Go developers, and uses OS X as his main development platform, with p9p's acme of course ;)).