Node: Core Dumps - What Are They?, Next: How to Use a Core Dump, Previous: Try Again..., Up: An example debugging session using gdb
Now the program has caused a Segmentation fault and dumped a core.
(If your program just said Segmentation fault and didn't say core dumped, then your shell has core dumps disabled. Type ulimit -c 10000000 to enable them and run ecount2 again.)
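You can check the limit before and after changing it; with no size argument, ulimit -c just prints the current core file size limit. A hypothetical transcript (the values on your system may differ):

bash$ ulimit -c
0
bash$ ulimit -c 10000000
bash$ ulimit -c
10000000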
What is a Segmentation fault? How about a core dump?
Part of the multi-user aspect of UNIX is that programs that attempt to write to memory that doesn't belong to them get caught and killed by the operating system. This is a major factor in the long-term stability of UNIX compared to lesser OSes.
A segmentation fault happens when a process attempts to write to memory outside of the address range that the OS has tagged as accessible to it. Since this is a good sign that the process is out of control, the OS takes immediate and severe action and kills the process.
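As an illustration (this is not part of the ecount programs), here is a minimal C program that makes exactly this kind of illegal write. Address 0 is never mapped into a normal process, so the OS kills the program with a segmentation fault:

#include <stdio.h>

int main(void)
{
        int *p = NULL;          /* address 0 is not in our address range */

        *p = 42;                /* illegal write - the OS kills us with SIGSEGV */

        printf("never reached\n");
        return 0;
}

Compile and run it and you should see the same Segmentation fault message that ecount2 produced (and core dumped, if core dumps are enabled).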
To aid in debugging the process, the OS then places a copy of all the memory occupied by the program and its data into a core file, which it stores in the program's current directory. Since in our example we haven't changed that directory, there should now be a file called core in the same place as ecount2.c and ecount2.
bash$ ls -l
total 320
-rw-------    1 paul     users       65536 Sep  5 21:11 core
-rwxr-xr-x    1 paul     users       21934 Sep  5 20:45 ecount1
-rw-r--r--    1 paul     users         732 Aug 30 21:01 ecount1.c
-rwxr-xr-x    1 paul     users       22006 Sep  5 20:45 ecount2
-rw-r--r--    1 paul     users         790 Aug 30 21:02 ecount2.c
Here we see that the file called core is present in the working directory, and has been given permissions so that *only* the user can read and write it. This is because the core file will contain all the information the program knew at the time it crashed, which may include passwords or personal information you had entered into it.
If your program uses lots of memory, then the core dump will be at least as big as the executable size plus the total data size, and possibly a lot bigger, as the OS will dump every page of memory the program has used in its entirety. This is the reason why some default configurations disable core dumps - they are only useful if you are in a position to debug the process, and can be several megabytes in size.