Tuesday, 18 February 2014

Digital Forensic: Acquire Large Capacity Disk


Split Large Capacity Disk

List the devices attached to your laptop or computer:
root@bt:~/evid# fdisk -l
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0xb05cd80c
 Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      102400    7  HPFS/NTFS
Partition 1 does not end on cylinder boundary.
/dev/sda2              13       12749   102297600    7  HPFS/NTFS
/dev/sda3           12749       18828    48828125   83  Linux
/dev/sda4           18828       60801   337154049    f  W95 Ext'd (LBA)
Partition 4 does not start on physical sector boundary.
/dev/sda5           18828       19077     1999872   82  Linux swap / Solaris
/dev/sda6           19077       60801   335153152    7  HPFS/NTFS
Disk /dev/sdc: 7803 MB, 7803174912 bytes
241 heads, 62 sectors/track, 1019 cylinders
Units = cylinders of 14942 * 512 = 7650304 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00044125

From the information above we see that /dev/sdc is an 8 GB device. We will use dd to clone the flash disk to an image file:
root@bt:~/evid# dd if=/dev/sdc of=image.disk2.dd
Use ls to see the files and folders in the evid folder.
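As a variation, the imaging and hashing can be done in a single pass by piping dd through tee. A minimal sketch, using a small zero-filled file as a stand-in "device" (an assumption for illustration; on the real system the source would be /dev/sdc):

```shell
# Sketch: image a source and hash the data stream in one pass with tee.
# A dummy file stands in for the real device here.
dd if=/dev/zero of=fake_sdc bs=1K count=64 2>/dev/null   # stand-in "device"
dd if=fake_sdc bs=4K 2>/dev/null | tee image.test.dd | md5sum > image.test.md5

# The hash recorded from the stream matches the hash of the written image:
md5sum image.test.dd
cat image.test.md5
```

This saves a second full read of the evidence just to compute the hash.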

split normally works on lines of input (i.e. from a text file). But if we use the -b option, we force split to treat the file as binary input and lines are ignored.
In newer versions of split we can also use the -d option to give us numerical suffixes (*.00, *.01, *.02, etc.).

split -d -b XXm <file to be split> <prefix of output files>
Now split image.disk2.dd:
root@bt:~/evid# split -d -b 2000m image.disk2.dd image.split2.
This results in 4 files of at most 2000 MB each (the image is roughly 8 GB), named with the prefix “image.split2.” as specified in the command, followed by “00”, “01”, “02” and so on (assuming a newer version of split that supports the -d option is used):
root@bt:~/evid# ls image.split2.*

The process can be reversed. If we want to reassemble the image from
the split parts (from CD-R, etc.), we can use the cat command and redirect the
output to a new file.
root@bt:~/evid# cat image.split2.00 image.split2.01 image.split2.02 image.split2.03 > image2.new
OR
root@bt:~/evid# cat image.split2.0* > image2.new
Check the hashes:
root@bt:~/evid# cat image.split2.0* | md5sum
root@bt:~/evid# md5sum image2.new 

Looking at the output of the above commands, we see that the md5sums match. We find the same hash for the split images “cat-ed” together and for the newly reassembled image.
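The whole split-and-reassemble round trip can be sketched on a small dummy file (the 10 MB zero-filled file and the names test.dd / test.split. are illustrative, not from the example above):

```shell
# Sketch: split a file into pieces, reassemble with cat, compare hashes.
dd if=/dev/zero of=test.dd bs=1M count=10 2>/dev/null
split -d -b 4m test.dd test.split.      # produces test.split.00 .01 .02
cat test.split.0* > test.new
md5sum test.dd test.new                 # the two hashes match
```

Because cat expands the wildcard in sorted order, the numeric suffixes from -d guarantee the pieces are concatenated back in the right sequence.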


Data carving using DD
Download the practice file image_carve.raw and copy it to the evid folder.
Have a brief look at the file image_carve.raw with your wonderful command line hexdump tool, xxd:
root@bt:~/evid# xxd image_carve.raw | less

Find the start of the JPEG by searching for the ffd8 signature (xxd and grep). Note that grep only matches within a single line of xxd output, so a signature that straddles two lines would be missed:
root@bt:~/evid# xxd image_carve.raw | grep ffd8
00052a0: b4f1 559c ffd8 ffe0 0010 4a46 4946 0001 ..U.......JFIF..

Now we can calculate the byte offset in decimal
root@bt:~/evid# echo "ibase=16;00052A0" | bc
21152

The ffd8 signature starts 4 bytes into that line (after b4f1 559c), so we add 4 to the line's offset: 21152 + 4 = 21156.
Now it’s time to find the end of the file.
root@bt:~/evid# xxd -s 21156 image_carve.raw | grep ffd9
0006c74: ffd9 d175 650b ce68 4543 0bf5 6705 a73c ...ue..hEC..g..<
calculate decimal
root@bt:~/evid# echo "ibase=16;0006C74" | bc
27764

Now that we know the start and the end of the file, we can calculate the size. We add 2 to the end offset to include the ffd9 marker itself (27764 + 2 = 27766):
root@bt:~/evid# echo "27766 - 21156" | bc
6610

We now know the file is 6610 bytes in size, and it starts at byte offset 21156. The carving is the easy part! We will use dd with three options: skip= (how far into the data chunk we begin “cutting”), bs= (block size, the number of bytes we treat as a “block”), and count= (the number of blocks we will be “cutting”).
root@bt:~/evid# dd if=image_carve.raw of=carv.jpg skip=21156 bs=1 count=6610
6610+0 records in
6610+0 records out
6610 bytes (6.6 kB) copied, 0.0285196 s, 232 kB/s
The carved file, carv.jpg, is now in our current directory. If you are in an X session, simply use the xv command to view it; xv will display the graphic image in its own window.

LIBEWF - Working with Expert Witness Files
The libewf tools and detailed project information can be found at:
https://www.uitwisselplatform.nl/projects/libewf/
We will cover the following tools briefly here:
ewfinfo
ewfverify
ewfexport
ewfacquire
ewfacquirestream

ewfinfo
root@bt:~/evid# ewfinfo ntfs_pract.E01

ewfverify
root@bt:~/evid# ewfverify ntfs_pract.E01



ewfexport
root@bt:~/evid# ewfexport ntfs_pract.E01 | md5sum
 

root@bt:~/evid# ewfexport -t ntfs_image.dd ntfs_pract.E01

root@bt:~/evid# md5sum ntfs_image.dd
d3c4659e4195c6df1da3afdbdc0dce8f ntfs_image.dd

ewfacquire
root@bt:~/evid# ewfacquire /dev/sdc
Acquiry parameters required, please provide the necessary input
Image path and filename without extension: /root/ntfs_ewf
Case number: 111-222
Description: Removable media (generic thumdrive)
Evidence number: 1
Examiner name: Umar Alfaruq
Notes: Seized from subject
Media type (fixed, removable, optical, memory) [removable]: removable disk
Selected option not supported, please try again or terminate using Ctrl^C.
Media type (fixed, removable, optical, memory) [removable]: removable
Media characteristics (logical, physical) [logical]: physical
Use compression (none, empty-block, fast, best) [none]: fast
Use EWF file format (ewf, smart, ftk, encase1, encase2, encase3, encase4, encase5, encase6, linen5, linen6, ewfx) [encase6]: encase5
Start to acquire at offset (0 >= value >= 4040748544) [0]:
The amount of bytes to acquire (0 >= value >= 4040748544) [4040748544]:
Evidence segment file size in bytes (1.0 MiB >= value >= 1.9 GiB) [1.4 GiB]:
The amount of bytes per sector (0 >= value >= 4294967295) [512]:
The amount of sectors to read at once (64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768) [64]:
The amount of sectors to be used as error granularity (1 >= value >= 64) [64]:
The amount of retries when a read error occurs (0 >= value >= 255) [2]:
Wipe sectors on read error (mimic EnCase like behavior) (yes, no) [no]: yes
