Preface: the DHCP handshake consists of four steps: DHCP DISCOVER, DHCP OFFER, DHCP REQUEST, and DHCP ACK. So how are these packets actually sent and received?
When the DHCP client starts, it first performs some initialization:
class DhcpState extends State {
    @Override
    public void enter() {
        clearDhcpState();
        if (initInterface() && initSockets()) {
            mReceiveThread = new ReceiveThread();
            mReceiveThread.start();
        } else {
            notifyFailure();
            transitionTo(mStoppedState);
        }
    }
For now, only the socket-related part matters:
private boolean initSockets() {
    return initPacketSocket() && initUdpSocket();
}
The comments in the source explain why there are two sockets:
// Sockets.
// - We use a packet socket to receive, because servers send us packets bound for IP addresses
// which we have not yet configured, and the kernel protocol stack drops these.
// - We use a UDP socket to send, so the kernel handles ARP and routing for us (DHCP servers can
// be off-link as well as on-link).
private FileDescriptor mPacketSock;
private FileDescriptor mUdpSock;
So the packet socket is used to receive and the UDP socket is used to send (a sketch of the UDP send side, which this post does not quote, follows the list):
DHCP DISCOVER  -> sent via the UDP socket
DHCP OFFER     -> received via the packet socket
DHCP REQUEST   -> sent via the UDP socket
DHCP ACK       -> received via the packet socket
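Based on the corresponding DhcpClient code, the setup of the UDP (send-side) socket looks roughly like the sketch below. This is an approximation, not an exact copy: flag order and constant names may differ between Android versions (Inet4Address.ANY is an AOSP-internal constant for 0.0.0.0, and mIfaceName here stands for the interface name string).

// Sketch of initUdpSocket(), the send-side counterpart of initPacketSocket().
// Approximation of the AOSP code, not an exact copy.
private boolean initUdpSocket() {
    try {
        mUdpSock = Os.socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        // Stick to the interface being configured, and allow broadcasts:
        // DISCOVER/REQUEST are typically sent to 255.255.255.255.
        Os.setsockoptIfreq(mUdpSock, SOL_SOCKET, SO_BINDTODEVICE, mIfaceName);
        Os.setsockoptInt(mUdpSock, SOL_SOCKET, SO_REUSEADDR, 1);
        Os.setsockoptInt(mUdpSock, SOL_SOCKET, SO_BROADCAST, 1);
        // Receiving happens on the packet socket, so this socket's receive buffer is unused.
        Os.setsockoptInt(mUdpSock, SOL_SOCKET, SO_RCVBUF, 0);
        // Bind to 0.0.0.0:68, the DHCP client port.
        Os.bind(mUdpSock, Inet4Address.ANY, DhcpPacket.DHCP_CLIENT);
    } catch (SocketException | ErrnoException e) {
        Log.e(TAG, "Error creating UDP socket", e);
        return false;
    }
    return true;
}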
Now let's look at how the packet socket is set up to receive packets:
private boolean initPacketSocket() {
    try {
        mPacketSock = Os.socket(AF_PACKET, SOCK_RAW, ETH_P_IP);
        PacketSocketAddress addr = new PacketSocketAddress((short) ETH_P_IP, mIface.index);
        Os.bind(mPacketSock, addr);
        NetworkUtils.attachDhcpFilter(mPacketSock);
    } catch(SocketException|ErrnoException e) {
        Log.e(TAG, "Error creating packet socket", e);
        return false;
    }
    return true;
}
Here the packet socket is created as a raw AF_PACKET socket, bound to the interface, and then NetworkUtils.attachDhcpFilter() attaches a socket filter to it so that only DHCP packets are delivered to this socket.
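For comparison, the send side: transmitting a prepared DHCP message (for example a DISCOVER built with DhcpPacket.buildDiscoverPacket) over the UDP socket is essentially one Os.sendto() to the DHCP server port. The helper below is a hypothetical, simplified sketch; the real DhcpClient logic also covers unicast renewals and other cases.

// Hypothetical helper, for illustration only: send a prepared DHCP message
// over the UDP socket. The kernel resolves ARP and routing for us, as the
// comment quoted earlier explains.
private boolean sendOverUdpSocket(ByteBuffer packet, Inet4Address to) {
    try {
        Os.sendto(mUdpSock, packet.array(), 0, packet.limit(), 0, to, DhcpPacket.DHCP_SERVER);
        return true;
    } catch (ErrnoException | SocketException e) {
        Log.e(TAG, "Can't send packet", e);
        return false;
    }
}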
The receive logic itself lives in ReceiveThread, which loops and blocks on read():
class ReceiveThread extends Thread {

    private final byte[] mPacket = new byte[DhcpPacket.MAX_LENGTH];
    private volatile boolean mStopped = false;

    public void halt() {
        mStopped = true;
        closeSockets();  // Interrupts the read() call the thread is blocked in.
    }

    @Override
    public void run() {
        if (DBG) Log.d(TAG, "Receive thread started");
        while (!mStopped) {
            int length = 0;  // Or compiler can't tell it's initialized if a parse error occurs.
            try {
                length = Os.read(mPacketSock, mPacket, 0, mPacket.length);
                DhcpPacket packet = null;
                packet = DhcpPacket.decodeFullPacket(mPacket, length, DhcpPacket.ENCAP_L2);
                if (DBG) Log.d(TAG, "Received packet: " + packet);
                sendMessage(CMD_RECEIVED_PACKET, packet);
            } catch (IOException|ErrnoException e) {
                if (!mStopped) {
                    Log.e(TAG, "Read error", e);
                    logError(DhcpErrorEvent.RECEIVE_ERROR);
                }
            } catch (DhcpPacket.ParseException e) {
                Log.e(TAG, "Can't parse packet: " + e.getMessage());
                if (PACKET_DBG) {
                    Log.d(TAG, HexDump.dumpHexString(mPacket, 0, length));
                }
                if (e.errorCode == DhcpErrorEvent.DHCP_NO_COOKIE) {
                    int snetTagId = 0x534e4554;
                    String bugId = "31850211";
                    int uid = -1;
                    String data = DhcpPacket.ParseException.class.getName();
                    EventLog.writeEvent(snetTagId, bugId, uid, data);
                }
                logError(e.errorCode);
            }
        }
        if (DBG) Log.d(TAG, "Receive thread stopped");
    }
}
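The decoded packet is handed to the DhcpClient state machine as CMD_RECEIVED_PACKET, and each state decides what to do with it. For instance, while waiting for an OFFER after a DISCOVER, the handling looks roughly like this sketch (modeled on DhcpClient's DhcpInitState; simplified):

// Sketch, modeled on DhcpClient#DhcpInitState.receivePacket(); simplified.
// Invoked for CMD_RECEIVED_PACKET while the client is waiting for an OFFER.
protected void receivePacket(DhcpPacket packet) {
    if (!isValidPacket(packet)) return;               // e.g. transaction id / MAC mismatch
    if (!(packet instanceof DhcpOfferPacket)) return;
    mOffer = packet.toDhcpResults();                  // lease, server address, DNS, ...
    if (mOffer != null) {
        Log.d(TAG, "Got pending lease: " + mOffer);
        transitionTo(mRequestingState);               // next step: send a DHCPREQUEST
    }
}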
Back to the filter: attachDhcpFilter is declared in NetworkUtils:
/**
* Attaches a socket filter that accepts DHCP packets to the given socket.
*/
public native static void attachDhcpFilter(FileDescriptor fd) throws SocketException;
It is a native method that attaches a socket filter accepting only DHCP packets to the given socket.
Let's look at the corresponding implementation:
http://androidxref.com/9.0.0_r3/xref/frameworks/base/core/jni/android_net_NetUtils.cpp#android_net_utils_attachDhcpFilter
static void android_net_utils_attachDhcpFilter(JNIEnv *env, jobject clazz, jobject javaFd)
{
    struct sock_filter filter_code[] = {
        // Check the protocol is UDP.
        BPF_STMT(BPF_LD  | BPF_B   | BPF_ABS,  kIPv4Protocol),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K,    IPPROTO_UDP, 0, 6),

        // Check this is not a fragment.
        BPF_STMT(BPF_LD  | BPF_H   | BPF_ABS,  kIPv4FlagsOffset),
        BPF_JUMP(BPF_JMP | BPF_JSET | BPF_K,   IP_OFFMASK, 4, 0),

        // Get the IP header length.
        BPF_STMT(BPF_LDX | BPF_B   | BPF_MSH,  kEtherHeaderLen),

        // Check the destination port.
        BPF_STMT(BPF_LD  | BPF_H   | BPF_IND,  kUDPDstPortIndirectOffset),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K,    kDhcpClientPort, 0, 1),

        // Accept or reject.
        BPF_STMT(BPF_RET | BPF_K,              0xffff),
        BPF_STMT(BPF_RET | BPF_K,              0)
    };
    struct sock_fprog filter = {
        sizeof(filter_code) / sizeof(filter_code[0]),
        filter_code,
    };

    int fd = jniGetFDFromFileDescriptor(env, javaFd);
    if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &filter, sizeof(filter)) != 0) {
        jniThrowExceptionFmt(env, "java/net/SocketException",
                "setsockopt(SO_ATTACH_FILTER): %s", strerror(errno));
    }
}
So what is BPF?
The Berkeley Packet Filter (BPF) is a raw interface to the data link layer on Unix-like systems that provides sending and receiving of raw link-layer packets. In addition, if the NIC driver supports promiscuous mode, BPF can put the card into that mode so that all packets on the network are received, whether or not they are addressed to the local host.
BPF also supports filtering packets, so that only the "interesting" packets are passed up to higher-level software. This avoids copying the other packets from the kernel into user space, lowering the CPU cost of capturing and the buffer space required, and therefore reducing packet loss. The filtering is implemented as an interpreter for a BPF virtual-machine instruction set: programs in this language can load packet data, perform arithmetic on it, compare the results against constants or packet data or test bits in the result, and accept or reject the packet based on the outcome. On some platforms, including FreeBSD and WinPcap, just-in-time compilation translates the virtual-machine instructions into native code to reduce the overhead further.
So filter_code above is exactly such a BPF virtual-machine program, run by the kernel against every frame arriving on the socket. Each BPF_JUMP carries two offsets, jump-if-true and jump-if-false, relative to the next instruction; here every failed check jumps to the final BPF_STMT(BPF_RET | BPF_K, 0), which rejects the frame, while a frame that passes all checks (IPv4 carrying UDP, not a later fragment, destination port 68 = the DHCP client port) reaches BPF_STMT(BPF_RET | BPF_K, 0xffff), which accepts it and delivers up to 0xffff bytes of it. In short, the filter drops everything except DHCP client traffic before it ever reaches user space.
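To make the filter concrete, the hypothetical Java method below (illustration only, not part of AOSP) performs on a raw Ethernet frame the same checks that the BPF program performs: the IPv4 payload is UDP, the datagram is not a later fragment, and the UDP destination port is 68, the DHCP client port.

// Hypothetical Java equivalent of the BPF program above, for illustration only.
// Returns true if the filter would accept the Ethernet frame.
static boolean wouldAcceptDhcpFrame(byte[] frame) {
    final int etherHeaderLen = 14;   // kEtherHeaderLen
    if (frame.length < etherHeaderLen + 20 + 8) return false;  // min IPv4 + UDP headers
    // IPv4 protocol field (1 byte at offset 9 of the IP header) must be UDP (17).
    if ((frame[etherHeaderLen + 9] & 0xff) != 17) return false;
    // Flags/fragment-offset field (2 bytes at offset 6): any fragment-offset bit set
    // (IP_OFFMASK = 0x1fff) means this is not the first fragment -> reject.
    int flagsAndOffset = ((frame[etherHeaderLen + 6] & 0xff) << 8)
            | (frame[etherHeaderLen + 7] & 0xff);
    if ((flagsAndOffset & 0x1fff) != 0) return false;
    // IP header length: low nibble of the first IP byte, in 32-bit words
    // (this is what BPF_LDX | BPF_B | BPF_MSH computes into the X register).
    int ipHeaderLen = (frame[etherHeaderLen] & 0x0f) * 4;
    // UDP destination port: 2 bytes at offset 2 of the UDP header.
    int off = etherHeaderLen + ipHeaderLen + 2;
    int dstPort = ((frame[off] & 0xff) << 8) | (frame[off + 1] & 0xff);
    return dstPort == 68;            // kDhcpClientPort
}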
To sum up: the DHCP flow has four steps, DHCP DISCOVER + DHCP OFFER + DHCP REQUEST + DHCP ACK, which inevitably involves sending and receiving packets. The packet socket is used to filter and receive packets, and the UDP socket is used to send them.
The filtering on the packet socket relies on a technique called BPF; how to write such filters is something to study further.
To be continued:
Learning BPF, advanced - common BPF commands