Debugging the Linux Kernel with UML -- 2: Testing the Network Protocol Stack

This article continues from the previous one:

Debugging the Linux Kernel with UML -- 1: Environment Setup

 

This time we cover:

Configuring networking for UML, and debugging the network protocol stack.

 

References:

http://uml.devloop.org.uk/index.html

http://uml.devloop.org.uk/howto.html

http://user-mode-linux.sourceforge.net/network.html

 

1. Download the tunctl source

http://sourceforge.net/projects/tunctl/files/

2. Build and install it on the host

 

[root@ZhouTianzuo ztz0223]# tar xvf tunctl-1.5.tar.gz
tunctl-1.5/
tunctl-1.5/tunctl.c
tunctl-1.5/tunctl.spec
tunctl-1.5/ChangeLog
tunctl-1.5/tunctl.sgml
tunctl-1.5/Makefile
[root@ZhouTianzuo ztz0223]# cd tunctl-1.5
[root@ZhouTianzuo tunctl-1.5]# dir
ChangeLog  Makefile  tunctl.c  tunctl.sgml  tunctl.spec
[root@ZhouTianzuo tunctl-1.5]# make
cc -g -Wall -o tunctl tunctl.c
docbook2man tunctl.sgml
Using catalogs: /etc/sgml/sgml-docbook-4.1-1.0-51.el6.cat
Using stylesheet: /usr/share/sgml/docbook/utils-0.6.14/docbook-utils.dsl#print
Working on: /home/ztz0223/tunctl-1.5/tunctl.sgml
[root@ZhouTianzuo tunctl-1.5]# make install
install -d /usr/sbin
install tunctl /usr/sbin
install -d /usr/share/man/man8
install tunctl.8 /usr/share/man/man8
[root@ZhouTianzuo tunctl-1.5]#

3. Create a tap device

[root@ZhouTianzuo tunctl-1.5]# tunctl -t tap1

4. Assign an IP address to tap1:

[root@ZhouTianzuo tunctl-1.5]# ifconfig tap1 192.168.0.100 up

 

5. When done, the device can be removed with:

[root@dark tunctl]# ./tunctl -d tap1

 

 

6. With networking in place we can start debugging UML, but first the host's forwarding and NAT rules need to be set up:

 

[root@ZhouTianzuo ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@ZhouTianzuo ~]# iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
[root@ZhouTianzuo ~]# iptables -I FORWARD -i tap1 -j ACCEPT
[root@ZhouTianzuo ~]# iptables -I FORWARD -o tap1 -j ACCEPT

These rules enable IP forwarding and masquerading, so the host will forward any packets arriving on tap1.
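The host-side steps above (create tap1, address it, enable forwarding, add the NAT rules) can be collected into one script. This is only a sketch of the steps shown in this article, not something from it; the script name and the log file host-setup.log are my own choices. It defaults to a dry run that records the commands instead of executing them, so it can be inspected before running it for real as root:

```shell
#!/bin/sh
# Sketch of the host-side network setup for UML debugging.
# DRY_RUN is on by default: commands are logged to host-setup.log
# instead of executed. Clear DRY_RUN and run as root to apply them.
DRY_RUN=${DRY_RUN:-yes}

TAP=tap1               # tap device the UML guest attaches to
TAP_IP=192.168.0.100   # host-side address of the tap link
OUT_IF=eth0            # host interface facing the outside network

if [ -n "$DRY_RUN" ]; then : > host-setup.log; fi   # start a fresh log

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "$*" >> host-setup.log   # record the command
    else
        "$@"                          # actually execute it
    fi
}

run tunctl -t "$TAP"
run ifconfig "$TAP" "$TAP_IP" up
run sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
run iptables -t nat -I POSTROUTING -o "$OUT_IF" -j MASQUERADE
run iptables -I FORWARD -i "$TAP" -j ACCEPT
run iptables -I FORWARD -o "$TAP" -j ACCEPT
```

Run it with DRY_RUN cleared (as root) once the logged commands look right.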

 

7. Start gdb

[root@ZhouTianzuo ~]# cd /um_linux/linux-3.5.1
[root@ZhouTianzuo linux-3.5.1]# gdb ./linux

 

8. Tell gdb to ignore two signals (UML uses SIGSEGV and SIGUSR1 internally, so they must be passed straight through to the program):

(gdb) handle SIGSEGV pass nostop noprint
Signal        Stop      Print   Pass to program   Description
SIGSEGV       No        No      Yes               Segmentation fault
(gdb) handle SIGUSR1 pass nostop noprint
Signal        Stop      Print   Pass to program   Description
SIGUSR1       No        No      Yes               User defined signal 1

 

9. We are debugging the network stack, so set the following breakpoints:

(gdb) info b
Num     Type           Disp Enb Address    What
1       breakpoint     keep y   0x0804abbd in main at arch/um/os-Linux/main.c:118
        breakpoint already hit 1 time
2       breakpoint     keep y   0x081f2b2a in icmp_send at net/ipv4/icmp.c:479
3       breakpoint     keep y   0x0804abbd in main at arch/um/os-Linux/main.c:118
4       breakpoint     keep y   0x081f1602 in arp_send at net/ipv4/arp.c:692
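The signal handling and breakpoints above have to be retyped every gdb session, so it is convenient to keep them in a command file. A sketch (the file name uml.gdb is my choice, not from the article):

```shell
# Write a reusable gdb command file for the UML session.
cat > uml.gdb <<'EOF'
# UML runs as a normal process and uses SIGSEGV/SIGUSR1 internally,
# so gdb must pass them through silently.
handle SIGSEGV pass nostop noprint
handle SIGUSR1 pass nostop noprint
# Breakpoints in the IPv4 stack, as in the article.
break icmp_send
break arp_send
EOF
```

Then start the session with `gdb -x uml.gdb ./linux` instead of entering the commands by hand.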

 

10. Start the kernel, attaching eth0 to tap1. The tuntap, prefix is required; it tells UML to use tap1 as the backend for eth0:

(gdb) run ubda=../root_fs mem=256m eth0=tuntap,tap1

Once the guest is up:

console [mc-1] enabled
 ubda: unknown partition table
Choosing a random ethernet address for device eth0
Netdevice 0 (b2:76:69:11:70:dd) :
TUN/TAP backend -
EXT3-fs (ubda): error: couldn't mount because of unsupported optional features (240)
EXT4-fs (ubda): couldn't mount as ext2 due to feature incompatibilities
EXT4-fs (ubda): mounted filesystem with ordered data mode. Opts: (null)
VFS: Mounted root (ext4 filesystem) readonly on device 98:0.
devtmpfs: mounted
Detaching after fork from child process 4571.
cat: /proc/cmdline: No such file or directory
                Welcome to CentOS
Starting udev: udev: starting version 147
udevd (289): /proc/289/oom_adj is deprecated, please use /proc/289/oom_score_adj instead.
line_ioctl: tty0: unknown ioctl: 0x541e
[  OK  ]
Setting hostname localhost.localdomain:  [  OK  ]
Setting up Logical Volume Management:   No volume groups found
[  OK  ]
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/ubda
ROOT: clean, 18691/98304 files, 120402/393216 blocks
[  OK  ]
Remounting root filesystem in read-write mode:  EXT4-fs (ubda): re-mounted. Opts: (null)
[  OK  ]
Mounting local filesystems:  [  OK  ]
/etc/rc.d/rc.sysinit: line 597: plymouth: command not found
Enabling /etc/fstab swaps:  [  OK  ]
Detaching after fork from child process 4925.
/etc/rc.d/rc.sysinit: line 662: plymouth: command not found
Detaching after fork from child process 4946.
Entering non-interactive startup
FATAL: Module ipv6 not found.
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0: 
Determining IP information for eth0... failed.
[FAILED]
FATAL: Module ipv6 not found.
Mounting other filesystems:  [  OK  ]
Retrigger failed udev events [  OK  ]
Starting sshd: Detaching after fork from child process 5159.
[  OK  ]
Detaching after fork from child process 5166.

CentOS release 6.3 (Final)
Kernel 3.5.1 on an i686

localhost login: root
Detaching after fork from child process 5197.
Detaching after fork from child process 5198.
Detaching after fork from child process 5199.
Detaching after fork from child process 5200.
Detaching after fork from child process 5201.
Last login: Mon Nov 26 01:23:00 on tty0

[root@localhost ~]#
Detaching after fork from child process 5220.
[root@localhost ~]#
Detaching after fork from child process 5221.
[root@localhost ~]#
Detaching after fork from child process 5222.

 

11. Configure the UML guest's IP address. The console now accepts input, but as the boot log showed, eth0 failed to obtain an address:

[root@localhost ~]# ifconfig
Detaching after fork from child process 7271.
eth0      Link encap:Ethernet  HWaddr D6:B0:EB:A8:56:79  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 b)  TX bytes:2052 (2.0 KiB)
          Interrupt:5 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

[root@localhost ~]#

 

Since tap1 was given 192.168.0.100, give the UML guest the address 192.168.0.99:

[root@localhost ~]# ifconfig eth0 192.168.0.99 netmask 255.255.255.0 up

Check the guest's address again:

[root@localhost ~]# ifconfig
Detaching after fork from child process 7298.
eth0      Link encap:Ethernet  HWaddr D6:B0:EB:A8:56:79  
          inet addr:192.168.0.99  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:980 (980.0 b)  TX bytes:3214 (3.1 KiB)
          Interrupt:5 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

[root@localhost ~]#

 

12. Routing setup

The network relationship between the UML guest and the host is:

          UML  <---->    tap1   <----->   host
  192.168.0.99     192.168.0.100     10.63.198.85

So after assigning the IP address we still need a default route:

[root@localhost ~]# route add -net 0.0.0.0 netmask 0.0.0.0 gw 192.168.0.100

or equivalently:

[root@localhost ~]# route add default gw 192.168.0.100

Check the routing table:

[root@localhost ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.0.100   0.0.0.0         UG    0      0        0 eth0
192.168.0.0     *               255.255.255.0   U     0      0        0 eth0
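The routing table is performing one simple computation: a destination is delivered directly only if it falls inside the guest's own network, 192.168.0.0/24; everything else is handed to the 192.168.0.100 gateway. The same on-link test can be sketched in shell arithmetic (my illustration, not part of the article):

```shell
# Sketch: the on-link test behind the routing table above. An address is
# delivered directly only if (addr & netmask) equals the guest's own
# network; otherwise it is sent to the gateway 192.168.0.100.
ip_to_int() {
    # dotted quad -> 32-bit integer
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

MASK=255.255.255.0
GUEST_NET=$(( $(ip_to_int 192.168.0.99) & $(ip_to_int $MASK) ))

for dst in 192.168.0.100 10.63.198.85; do
    if [ $(( $(ip_to_int "$dst") & $(ip_to_int $MASK) )) -eq "$GUEST_NET" ]; then
        echo "$dst: on-link, delivered directly over eth0"
    else
        echo "$dst: off-link, sent to gateway 192.168.0.100"
    fi
done
```

This is why the cross-subnet pings in step 14 only work once the default route is in place.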

 

 

13. Test a local ping: ping 192.168.0.100.

Execution stops immediately, because we set a breakpoint on arp_send:

 

[root@localhost ~]# ping 192.168.0.100
Detaching after fork from child process 5233.
PING 192.168.0.100 (192.168.0.100) 56(84) bytes of data.

Breakpoint 4, arp_send (type=1, ptype=2054, dest_ip=1677764800, dev=0x17d08800, src_ip=1660987584, dest_hw=0x0, src_hw=0x17d06ea8 "\262vi\021p", <incomplete sequence \335>,
    target_hw=0x0) at net/ipv4/arp.c:692
692     {

(gdb) bt
#0  arp_send (type=1, ptype=2054, dest_ip=1677764800, dev=0x17d08800, src_ip=1660987584, dest_hw=0x0, src_hw=0x17d06ea8 "\262vi\021p", <incomplete sequence \335>,
    target_hw=0x0) at net/ipv4/arp.c:692
#1  0x081f1cba in arp_solicit (neigh=0x17cdaaa0, skb=0x17c91740) at net/ipv4/arp.c:378
#2  0x081c1031 in neigh_probe (neigh=0x17cdaaa0) at net/core/neighbour.c:883
#3  0x081c11d3 in __neigh_event_send (neigh=0x17cdaaa0, skb=0x17c91800) at net/core/neighbour.c:1030
#4  0x081c125c in neigh_event_send (neigh=0x17cdaaa0, skb=0x17c91800) at include/net/neighbour.h:318
#5  neigh_resolve_output (neigh=0x17cdaaa0, skb=0x17c91800) at net/core/neighbour.c:1291
#6  0x081d5839 in neigh_output (skb=0x17c91800) at include/net/neighbour.h:360
#7  ip_finish_output2 (skb=0x17c91800) at net/ipv4/ip_output.c:210
#8  ip_finish_output (skb=0x17c91800) at net/ipv4/ip_output.c:243
#9  0x081d58a6 in ip_output (skb=0x17c91800) at net/ipv4/ip_output.c:316
#10 0x081d36d9 in dst_output (skb=0x17c91800) at include/net/dst.h:435
#11 ip_local_out (skb=0x17c91800) at net/ipv4/ip_output.c:110
#12 0x081d37ba in ip_send_skb (skb=0x17c91800) at net/ipv4/ip_output.c:1383
#13 0x081d4761 in ip_push_pending_frames (sk=0x17c9c840, fl4=0x17d72c64) at net/ipv4/ip_output.c:1403
#14 0x081ed3d1 in raw_sendmsg (iocb=0x17d72ce4, sk=0x17c9c840, msg=0x17d72ecc, len=64) at net/ipv4/raw.c:608
#15 0x081f5ae9 in inet_sendmsg (iocb=0x17d72ce4, sock=0x1788e300, msg=0x17d72ecc, size=64) at net/ipv4/af_inet.c:746
#16 0x081aba23 in __sock_sendmsg_nosec (sock=0x1788e300, msg=0x17d72ecc, size=64) at net/socket.c:564
#17 __sock_sendmsg (sock=0x1788e300, msg=0x17d72ecc, size=64) at net/socket.c:572
#18 sock_sendmsg (sock=0x1788e300, msg=0x17d72ecc, size=64) at net/socket.c:583
#19 0x081ac40f in __sys_sendmsg (sock=0x1788e300, msg=<value optimized out>, msg_sys=0x17d72ecc, flags=0, used_address=0x0) at net/socket.c:1990
#20 0x081ac5d2 in sys_sendmsg (fd=3, msg=0x2aab3a34, flags=0) at net/socket.c:2025
#21 0x081ad0fd in sys_socketcall (call=16, args=0xbfb0f4f0) at net/socket.c:2435
#22 0x0805b83b in handle_syscall (r=0x17e35430) at arch/um/kernel/skas/syscall.c:35
#23 0x080691e3 in handle_trap (regs=0x17e35430) at arch/um/os-Linux/skas/process.c:193
#24 userspace (regs=0x17e35430) at arch/um/os-Linux/skas/process.c:418
#25 0x08059716 in fork_handler () at arch/um/kernel/process.c:181
#26 0x00000000 in ?? ()

 

Continue with c and the ping resumes:

(gdb) c
Continuing.
64 bytes from 192.168.0.100: icmp_seq=1 ttl=64 time=26439 ms
64 bytes from 192.168.0.100: icmp_seq=2 ttl=64 time=0.347 ms
64 bytes from 192.168.0.100: icmp_seq=3 ttl=64 time=0.657 ms
64 bytes from 192.168.0.100: icmp_seq=4 ttl=64 time=0.367 ms
64 bytes from 192.168.0.100: icmp_seq=5 ttl=64 time=0.586 ms
64 bytes from 192.168.0.100: icmp_seq=6 ttl=64 time=0.508 ms

So even after the UML guest has booted, gdb debugging works as usual (the 26-second first RTT is the time the first echo request spent parked at the breakpoint).

 

On the host, capture packets on interface tap1:

[root@ZhouTianzuo ~]# tcpdump -i tap1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap1, link-type EN10MB (Ethernet), capture size 65535 bytes
14:44:29.528766 IP 192.168.0.99 > 192.168.0.100: ICMP echo request, id 30722, seq 40, length 64
14:44:29.528882 IP 192.168.0.100 > 192.168.0.99: ICMP echo reply, id 30722, seq 40, length 64
14:44:30.533813 IP 192.168.0.99 > 192.168.0.100: ICMP echo request, id 30722, seq 41, length 64
14:44:30.533890 IP 192.168.0.100 > 192.168.0.99: ICMP echo reply, id 30722, seq 41, length 64
14:44:31.524817 ARP, Request who-has 192.168.0.99 tell 192.168.0.100, length 28
14:44:32.502937 ARP, Reply 192.168.0.99 is-at b2:76:69:11:70:dd (oui Unknown), length 28
14:44:32.513951 IP 192.168.0.99 > 192.168.0.100: ICMP echo request, id 30722, seq 42, length 64
14:44:32.514048 IP 192.168.0.100 > 192.168.0.99: ICMP echo reply, id 30722, seq 42, length 64
14:44:33.522242 IP 192.168.0.99 > 192.168.0.100: ICMP echo request, id 30722, seq 43, length 64
14:44:33.522320 IP 192.168.0.100 > 192.168.0.99: ICMP echo reply, id 30722, seq 43, length 64
14:44:34.532385 IP 192.168.0.99 > 192.168.0.100: ICMP echo request, id 30722, seq 44, length 64
14:44:34.532459 IP 192.168.0.100 > 192.168.0.99: ICMP echo reply, id 30722, seq 44, length 64
14:44:35.535540 IP 192.168.0.99 > 192.168.0.100: ICMP echo request, id 30722, seq 45, length 64
14:44:35.535598 IP 192.168.0.100 > 192.168.0.99: ICMP echo reply, id 30722, seq 45, length 64
14:44:36.540267 IP 192.168.0.99 > 192.168.0.100: ICMP echo request, id 30722, seq 46, length 64
14:44:36.540332 IP 192.168.0.100 > 192.168.0.99: ICMP echo reply, id 30722, seq 46, length 64
14:44:37.546049 IP 192.168.0.99 > 192.168.0.100: ICMP echo request, id 30722, seq 47, length 64
14:44:37.546099 IP 192.168.0.100 > 192.168.0.99: ICMP echo reply, id 30722, seq 47, length 64
14:44:38.550155 IP 192.168.0.99 > 192.168.0.100: ICMP echo request, id 30722, seq 48, length 64
14:44:38.550227 IP 192.168.0.100 > 192.168.0.99: ICMP echo reply, id 30722, seq 48, length 64
14:44:39.555356 IP 192.168.0.99 > 192.168.0.100: ICMP echo request, id 30722, seq 49, length 64

 

So UML is quite handy for debugging the kernel protocol stack.
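Captures like the one above are also easy to post-process. As an illustration (my addition, not from the article), a small awk filter can count echo request/reply pairs; the sample lines in the here-document are copied from the tap1 capture above, and with a live capture you would pipe `tcpdump -l -i tap1` into the same awk program:

```shell
# Sketch: counting ICMP echo request/reply pairs in a tcpdump capture.
cat > capture.txt <<'EOF'
14:44:29.528766 IP 192.168.0.99 > 192.168.0.100: ICMP echo request, id 30722, seq 40, length 64
14:44:29.528882 IP 192.168.0.100 > 192.168.0.99: ICMP echo reply, id 30722, seq 40, length 64
14:44:31.524817 ARP, Request who-has 192.168.0.99 tell 192.168.0.100, length 28
14:44:32.502937 ARP, Reply 192.168.0.99 is-at b2:76:69:11:70:dd (oui Unknown), length 28
14:44:32.513951 IP 192.168.0.99 > 192.168.0.100: ICMP echo request, id 30722, seq 42, length 64
14:44:32.514048 IP 192.168.0.100 > 192.168.0.99: ICMP echo reply, id 30722, seq 42, length 64
EOF
awk '/ICMP echo request/ { req++ }
     /ICMP echo reply/   { rep++ }
     END { printf "requests=%d replies=%d unanswered=%d\n", req, rep, req - rep }' capture.txt
# prints: requests=2 replies=2 unanswered=0
```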

 

14. Test pinging across subnets: first the host itself (10.63.198.85), then another machine on the host's network (10.63.198.99):

[root@localhost ~]# ping 10.63.198.85
Detaching after fork from child process 7290.
PING 10.63.198.85 (10.63.198.85) 56(84) bytes of data.
64 bytes from 10.63.198.85: icmp_seq=1 ttl=64 time=1.28 ms
64 bytes from 10.63.198.85: icmp_seq=2 ttl=64 time=0.425 ms
64 bytes from 10.63.198.85: icmp_seq=3 ttl=64 time=0.380 ms
64 bytes from 10.63.198.85: icmp_seq=4 ttl=64 time=0.427 ms
64 bytes from 10.63.198.85: icmp_seq=5 ttl=64 time=0.379 ms
64 bytes from 10.63.198.85: icmp_seq=6 ttl=64 time=0.416 ms
64 bytes from 10.63.198.85: icmp_seq=7 ttl=64 time=0.444 ms
64 bytes from 10.63.198.85: icmp_seq=8 ttl=64 time=0.431 ms
64 bytes from 10.63.198.85: icmp_seq=9 ttl=64 time=0.397 ms
^C
--- 10.63.198.85 ping statistics ---
9 packets transmitted, 9 received, 0% packet loss, time 9034ms
rtt min/avg/max/mdev = 0.379/0.509/1.286/0.276 ms

[root@localhost ~]# ping 10.63.198.99
Detaching after fork from child process 7293.
PING 10.63.198.99 (10.63.198.99) 56(84) bytes of data.
64 bytes from 10.63.198.99: icmp_seq=1 ttl=63 time=2.61 ms
64 bytes from 10.63.198.99: icmp_seq=2 ttl=63 time=1.03 ms
^C
--- 10.63.198.99 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1779ms
rtt min/avg/max/mdev = 1.033/1.822/2.612/0.790 ms
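As a small aside (my addition, not from the article), ping's rtt summary line is easy to post-process; here awk pulls the average RTT out of the 10.63.198.99 run above:

```shell
# Sketch: extracting the average RTT from ping's summary line.
# Splitting on both '/' and ' ' makes the avg the fourth-from-last field.
line='rtt min/avg/max/mdev = 1.033/1.822/2.612/0.790 ms'
avg=$(printf '%s\n' "$line" | awk -F'[/ ]' '{print $(NF-3)}')
echo "average rtt: $avg ms"
# prints: average rtt: 1.822 ms
```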

 

 
