Netty Study Guide (notes and resource roundup)
non-blocking IO
A channel is a bit like a stream: it is a two-way conduit for reading and writing data. You can read data from a channel into a buffer, and write a buffer's data into a channel, whereas the older streams are one-directional (an input stream can only read; an output stream can only write). A channel is also lower-level than a stream.
Common Channels include FileChannel, DatagramChannel (UDP), SocketChannel (TCP, client side), and ServerSocketChannel (TCP, server side).
A buffer is used to stage data being read or written (a temporary holding area for data read from a channel, or for data about to be written out to a channel). Common buffers include ByteBuffer (with the subclasses MappedByteBuffer, DirectByteBuffer, and HeapByteBuffer) as well as ShortBuffer, IntBuffer, LongBuffer, FloatBuffer, DoubleBuffer, and CharBuffer.
The word selector is hard to interpret from its name alone; its purpose is best understood through the evolution of server designs.
Before nio, how did a server handle client connections? As in the figure below, each thread served exactly one socket connection (imagine a restaurant hiring a dedicated waiter for every single customer: the cost is far too high).
To avoid creating one thread per client when there are too many connections, a thread pool can be used instead.
A selector works with a single thread to manage multiple channels and pick up the events occurring on them (events such as acceptable, readable, writable). The channels operate in non-blocking mode, so the thread is never stuck waiting on any one channel (contrast with the thread-pool version, where a thread can only move on to the next socket after the current socket disconnects). This suits scenarios with very many connections but low traffic (meaning: if some client were sending a large amount of data, the thread would stay busy with that client and other clients' requests would be put on hold).
As in the figure below: the thread is a waiter, each channel is a customer, and the selector is a tool that watches every customer at once, a monitor over all of them. As soon as any customer needs something, the selector knows and dispatches the waiter, so we need neither one waiter per customer (the multi-threaded version) nor a waiter who can serve the next customer only after finishing with the current one (the thread-pool version).
Calling the selector's select() blocks until some channel has a read/write-ready event; once events occur, select returns and hands them to the thread to process.
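To make this concrete, below is a minimal sketch of ours (not from the original course code; port 8080 is just an example) of the select loop that the rest of this guide builds up step by step:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.util.Iterator;
public class SelectLoopSketch {
public static void main(String[] args) throws IOException {
Selector selector = Selector.open();
ServerSocketChannel ssc = ServerSocketChannel.open();
ssc.configureBlocking(false); // non-blocking mode is required for use with a selector
ssc.bind(new InetSocketAddress(8080));
ssc.register(selector, SelectionKey.OP_ACCEPT);
while (true) {
selector.select(); // blocks until some channel has an event
Iterator<SelectionKey> it = selector.selectedKeys().iterator();
while (it.hasNext()) {
SelectionKey key = it.next();
it.remove(); // entries in selectedKeys must be removed by us
if (key.isAcceptable()) {
// accept here and register the new SocketChannel for OP_READ (shown later)
}
}
}
}
}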
Suppose there is a plain text file data.txt with the content
1234567890abcd
Use a FileChannel to read the file contents
@Slf4j
public class ChannelDemo1 {
public static void main(String[] args) {
try (RandomAccessFile file = new RandomAccessFile("helloword/data.txt", "rw")) {
FileChannel channel = file.getChannel();
ByteBuffer buffer = ByteBuffer.allocate(10);
do {
// read from the channel (i.e. write into the buffer)
int len = channel.read(buffer);
log.debug("bytes read: {}", len);
if (len == -1) {
break;
}
// flip the buffer into read mode
buffer.flip();
while(buffer.hasRemaining()) {
log.debug("{}", (char)buffer.get());
}
// clear the buffer back into write mode
buffer.clear();
} while (true);
} catch (IOException e) {
e.printStackTrace();
}
}
}
Output
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - bytes read: 10
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 1
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 2
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 3
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 4
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 5
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 6
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 7
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 8
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 9
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - 0
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - bytes read: 4
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - a
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - b
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - c
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - d
10:39:03 [DEBUG] [main] c.i.n.ChannelDemo1 - bytes read: -1
Classroom example
@Slf4j
public class TestByteBuffer {
public static void main(String[] args) {
// FileChannel
// obtained via: 1. input/output streams, 2. RandomAccessFile
try (FileChannel channel = new FileInputStream("data.txt").getChannel()) {
// prepare the buffer
ByteBuffer buffer = ByteBuffer.allocate(10);
while(true) {
// read data from the channel (i.e. write into the buffer)
int len = channel.read(buffer);
log.debug("bytes read {}", len);
if(len == -1) { // end of file
break;
}
// print the buffer's contents
buffer.flip(); // switch to read mode
while(buffer.hasRemaining()) { // any unread data left?
byte b = buffer.get();
log.debug("actual byte {}", (char) b);
}
buffer.clear(); // switch back to write mode
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
Output
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - bytes read 10
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 1
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 2
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 3
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 4
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 5
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 6
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 7
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 8
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 9
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte 0
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - bytes read 5
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte a
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte b
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte c
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - actual byte
23:08:48 [DEBUG] [main] c.i.n.c.TestByteBuffer - bytes read -1
A ByteBuffer has these important properties: capacity, position, and limit (plus an optional mark).
Initially, position is 0 and limit equals capacity:
(figure: initial buffer state)
In write mode, position is the write index and limit equals the capacity. Below is the state after writing 4 bytes:
(figure: state after writing 4 bytes)
After flip, position becomes the read index and limit becomes the read limit (the old write position becomes the new limit):
(figure: state after flip)
After reading 4 bytes, the state is:
(figure: state after reading 4 bytes)
After clear, the state is:
(figure: state after clear)
The compact method moves the unread portion to the front and then switches to write mode
(like clear, it switches the bytebuffer to write mode, except that any still-unread data is first shifted to the front):
(figure: state after compact)
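A short runnable check (ours) of the state transitions described above, printing position/limit after each operation:
import java.nio.ByteBuffer;
public class BufferStateDemo {
public static void main(String[] args) {
ByteBuffer buf = ByteBuffer.allocate(10);
buf.put(new byte[]{'a', 'b', 'c', 'd'}); // write 4 bytes
System.out.println(buf.position() + "/" + buf.limit()); // 4/10
buf.flip(); // read mode
System.out.println(buf.position() + "/" + buf.limit()); // 0/4
buf.get(); buf.get(); // read 2 bytes
buf.compact(); // keeps 'c','d' at the front, back to write mode
System.out.println(buf.position() + "/" + buf.limit()); // 2/10
buf.clear(); // discard everything
System.out.println(buf.position() + "/" + buf.limit()); // 0/10
}
}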
import io.netty.util.internal.StringUtil;
import java.nio.ByteBuffer;
import static io.netty.util.internal.MathUtil.isOutOfBounds;
import static io.netty.util.internal.StringUtil.NEWLINE;
public class ByteBufferUtil {
private static final char[] BYTE2CHAR = new char[256];
private static final char[] HEXDUMP_TABLE = new char[256 * 4];
private static final String[] HEXPADDING = new String[16];
private static final String[] HEXDUMP_ROWPREFIXES = new String[65536 >>> 4];
private static final String[] BYTE2HEX = new String[256];
private static final String[] BYTEPADDING = new String[16];
static {
final char[] DIGITS = "0123456789abcdef".toCharArray();
for (int i = 0; i < 256; i++) {
HEXDUMP_TABLE[i << 1] = DIGITS[i >>> 4 & 0x0F];
HEXDUMP_TABLE[(i << 1) + 1] = DIGITS[i & 0x0F];
}
int i;
// Generate the lookup table for hex dump paddings
for (i = 0; i < HEXPADDING.length; i++) {
int padding = HEXPADDING.length - i;
StringBuilder buf = new StringBuilder(padding * 3);
for (int j = 0; j < padding; j++) {
buf.append(" ");
}
HEXPADDING[i] = buf.toString();
}
// Generate the lookup table for the start-offset header in each row (up to 64KiB).
for (i = 0; i < HEXDUMP_ROWPREFIXES.length; i++) {
StringBuilder buf = new StringBuilder(12);
buf.append(NEWLINE);
buf.append(Long.toHexString(i << 4 & 0xFFFFFFFFL | 0x100000000L));
buf.setCharAt(buf.length() - 9, '|');
buf.append('|');
HEXDUMP_ROWPREFIXES[i] = buf.toString();
}
// Generate the lookup table for byte-to-hex-dump conversion
for (i = 0; i < BYTE2HEX.length; i++) {
BYTE2HEX[i] = ' ' + StringUtil.byteToHexStringPadded(i);
}
// Generate the lookup table for byte dump paddings
for (i = 0; i < BYTEPADDING.length; i++) {
int padding = BYTEPADDING.length - i;
StringBuilder buf = new StringBuilder(padding);
for (int j = 0; j < padding; j++) {
buf.append(' ');
}
BYTEPADDING[i] = buf.toString();
}
// Generate the lookup table for byte-to-char conversion
for (i = 0; i < BYTE2CHAR.length; i++) {
if (i <= 0x1f || i >= 0x7f) {
BYTE2CHAR[i] = '.';
} else {
BYTE2CHAR[i] = (char) i;
}
}
}
/**
* Dump the entire buffer (0 ~ capacity)
* @param buffer
*/
public static void debugAll(ByteBuffer buffer) {
int oldlimit = buffer.limit();
buffer.limit(buffer.capacity());
StringBuilder origin = new StringBuilder(256);
appendPrettyHexDump(origin, buffer, 0, buffer.capacity());
System.out.println("+--------+-------------------- all ------------------------+----------------+");
System.out.printf("position: [%d], limit: [%d]\n", buffer.position(), oldlimit);
System.out.println(origin);
buffer.limit(oldlimit);
}
/**
* Dump only the readable content (position ~ limit)
* @param buffer
*/
public static void debugRead(ByteBuffer buffer) {
StringBuilder builder = new StringBuilder(256);
appendPrettyHexDump(builder, buffer, buffer.position(), buffer.limit() - buffer.position());
System.out.println("+--------+-------------------- read -----------------------+----------------+");
System.out.printf("position: [%d], limit: [%d]\n", buffer.position(), buffer.limit());
System.out.println(builder);
}
private static void appendPrettyHexDump(StringBuilder dump, ByteBuffer buf, int offset, int length) {
if (isOutOfBounds(offset, length, buf.capacity())) {
throw new IndexOutOfBoundsException(
"expected: " + "0 <= offset(" + offset + ") <= offset + length(" + length
+ ") <= " + "buf.capacity(" + buf.capacity() + ')');
}
if (length == 0) {
return;
}
dump.append(
" +-------------------------------------------------+" +
NEWLINE + " | 0 1 2 3 4 5 6 7 8 9 a b c d e f |" +
NEWLINE + "+--------+-------------------------------------------------+----------------+");
final int startIndex = offset;
final int fullRows = length >>> 4;
final int remainder = length & 0xF;
// Dump the rows which have 16 bytes.
for (int row = 0; row < fullRows; row++) {
int rowStartIndex = (row << 4) + startIndex;
// Per-row prefix.
appendHexDumpRowPrefix(dump, row, rowStartIndex);
// Hex dump
int rowEndIndex = rowStartIndex + 16;
for (int j = rowStartIndex; j < rowEndIndex; j++) {
dump.append(BYTE2HEX[getUnsignedByte(buf, j)]);
}
dump.append(" |");
// ASCII dump
for (int j = rowStartIndex; j < rowEndIndex; j++) {
dump.append(BYTE2CHAR[getUnsignedByte(buf, j)]);
}
dump.append('|');
}
// Dump the last row which has less than 16 bytes.
if (remainder != 0) {
int rowStartIndex = (fullRows << 4) + startIndex;
appendHexDumpRowPrefix(dump, fullRows, rowStartIndex);
// Hex dump
int rowEndIndex = rowStartIndex + remainder;
for (int j = rowStartIndex; j < rowEndIndex; j++) {
dump.append(BYTE2HEX[getUnsignedByte(buf, j)]);
}
dump.append(HEXPADDING[remainder]);
dump.append(" |");
// Ascii dump
for (int j = rowStartIndex; j < rowEndIndex; j++) {
dump.append(BYTE2CHAR[getUnsignedByte(buf, j)]);
}
dump.append(BYTEPADDING[remainder]);
dump.append('|');
}
dump.append(NEWLINE +
"+--------+-------------------------------------------------+----------------+");
}
private static void appendHexDumpRowPrefix(StringBuilder dump, int row, int rowStartIndex) {
if (row < HEXDUMP_ROWPREFIXES.length) {
dump.append(HEXDUMP_ROWPREFIXES[row]);
} else {
dump.append(NEWLINE);
dump.append(Long.toHexString(rowStartIndex & 0xFFFFFFFFL | 0x100000000L));
dump.setCharAt(dump.length() - 9, '|');
dump.append('|');
}
}
public static short getUnsignedByte(ByteBuffer buffer, int index) {
return (short) (buffer.get(index) & 0xFF);
}
}
import static cn.itcast.nio.c2.ByteBufferUtil.debugAll;
public class TestByteBufferReadWrite {
public static void main(String[] args) {
ByteBuffer buffer = ByteBuffer.allocate(10);
buffer.put((byte) 0x61); // 'a'
// use the debug helper to view a simple memory diagram of the buffer
debugAll(buffer);
buffer.put(new byte[]{0x62, 0x63, 0x64}); // b c d
debugAll(buffer);
// System.out.println(buffer.get());
// switch to read mode
buffer.flip();
System.out.println(buffer.get());
debugAll(buffer);
// note: at this point the last byte of this example (d) has not been cleared (see the two 64s below),
// because it will simply be overwritten by the next write
buffer.compact();
debugAll(buffer);
buffer.put(new byte[]{0x65, 0x6f});
debugAll(buffer);
}
}
/* output */
+--------+-------------------- all ------------------------+----------------+
position: [1], limit: [10]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 61 00 00 00 00 00 00 00 00 00 |a......... |
+--------+-------------------------------------------------+----------------+
+--------+-------------------- all ------------------------+----------------+
position: [4], limit: [10]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 61 62 63 64 00 00 00 00 00 00 |abcd...... |
+--------+-------------------------------------------------+----------------+
97
+--------+-------------------- all ------------------------+----------------+
position: [1], limit: [4]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 61 62 63 64 00 00 00 00 00 00 |abcd...... |
+--------+-------------------------------------------------+----------------+
+--------+-------------------- all ------------------------+----------------+
position: [3], limit: [10]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 62 63 64 64 00 00 00 00 00 00 |bcdd...... |
+--------+-------------------------------------------------+----------------+
+--------+-------------------- all ------------------------+----------------+
position: [5], limit: [10]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 62 63 64 65 6f 00 00 00 00 00 |bcdeo..... |
+--------+-------------------------------------------------+----------------+
The allocate method reserves space for a ByteBuffer; the other buffer classes have the same method
ByteBuffer buf = ByteBuffer.allocate(16); // once allocated, the capacity can never change
import java.nio.ByteBuffer;
public class TestByteBufferAllocate {
public static void main(String[] args) {
System.out.println(ByteBuffer.allocate(16).getClass());
System.out.println(ByteBuffer.allocateDirect(16).getClass());
/*
class java.nio.HeapByteBuffer - Java heap memory; lower read/write efficiency, affected by GC
class java.nio.DirectByteBuffer - direct memory; higher read/write efficiency (one less copy), not affected by GC, but slower to allocate
*/
}
}
There are two ways to write data into a buffer, namely
int readBytes = channel.read(buf); // reading from a channel writes into the buffer
and
buf.put((byte)127); // each byte written advances buf's position by 1
buffer.put(new byte[]{0x62, 0x63, 0x64, 'e', 'f', 'g'}); // b c d e f g
// 1. note: after put you must call flip to switch to read mode before the data can be read
// 2. once the buffer is full, further puts throw BufferOverflowException
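A quick sketch of ours illustrating the overflow rule in note 2:
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;
public class PutOverflowDemo {
public static void main(String[] args) {
ByteBuffer buf = ByteBuffer.allocate(4);
buf.put(new byte[]{'a', 'b', 'c', 'd'}); // the buffer is now full
try {
buf.put((byte) 'e'); // no room left
} catch (BufferOverflowException e) {
System.out.println("buffer full: " + e);
}
}
}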
Likewise, there are two ways to read data out of a buffer, namely
int writeBytes = channel.write(buf); // writing to a channel reads from the buffer
and
byte b = buf.get(); // reads one byte from the buffer; each byte read advances position by 1
buffer.get(new byte[4]); // reads from the buffer into the given array; the length must not exceed the buffer's remaining readable bytes, or BufferUnderflowException is thrown
The get method moves the position read pointer forward; to re-read data you can either rewind or use mark()/reset().
public class TestByteBufferRead {
public static void main(String[] args) {
ByteBuffer buffer = ByteBuffer.allocate(10);
buffer.put(new byte[]{'a', 'b', 'c', 'd'});
// flip: switch to read mode
buffer.flip();
buffer.get(new byte[4]); // the array length must not exceed the buffer's readable byte count 4 (limit - position => 4-0)
debugAll(buffer); // buffer.remaining() can be used directly as the array length (a larger array would fail)
// rewind: read from the beginning again
buffer.rewind(); // position set to 0, mark set to -1
System.out.println((char)buffer.get()); // a
debugAll(buffer);
// get(i) does not move the read index
System.out.println((char) buffer.get(0)); // a
debugAll(buffer);
}
}
/* output */
+--------+-------------------- all ------------------------+----------------+
position: [4], limit: [4]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 61 62 63 64 00 00 00 00 00 00 |abcd...... |
+--------+-------------------------------------------------+----------------+
a
+--------+-------------------- all ------------------------+----------------+
position: [1], limit: [4]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 61 62 63 64 00 00 00 00 00 00 |abcd...... |
+--------+-------------------------------------------------+----------------+
a
+--------+-------------------- all ------------------------+----------------+
position: [1], limit: [4]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 61 62 63 64 00 00 00 00 00 00 |abcd...... |
+--------+-------------------------------------------------+----------------+
mark records a position during reading; even after position changes, calling reset returns to the marked position.
Note: both rewind and flip clear the mark (clearing means setting mark to -1).
public class TestByteBufferRead {
public static void main(String[] args) {
ByteBuffer buffer = ByteBuffer.allocate(10);
buffer.put(new byte[]{'a', 'b', 'c', 'd'});
// switch to read mode
buffer.flip();
System.out.println((char)buffer.get()); // a
System.out.println((char)buffer.get()); // b
buffer.mark(); // records an internal mark, here at index 2
System.out.println((char)buffer.get()); // c
System.out.println((char)buffer.get()); // d
buffer.reset(); // restores position to the marked index
System.out.println((char)buffer.get()); // c
System.out.println((char)buffer.get()); // d
}
}
public class TestByteBufferString {
public static void main(String[] args) {
// 1. String to ByteBuffer (flip is still needed to switch to read mode, as the position & limit show)
ByteBuffer buffer1 = ByteBuffer.allocate(16);
buffer1.put("hello".getBytes());
debugAll(buffer1);
// 2. Charset (no flip needed, the result is already in read mode, as the position & limit show)
ByteBuffer buffer2 = StandardCharsets.UTF_8.encode("hello");
debugAll(buffer2);
// 3. wrap
ByteBuffer buffer3 = ByteBuffer.wrap("hello".getBytes());
debugAll(buffer3);
// 4. back to String
buffer1.flip(); // buffer1 must first be switched to read mode
String str1 = StandardCharsets.UTF_8.decode(buffer1).toString(); // decode returns a CharBuffer
System.out.println(str1);
String str2 = StandardCharsets.UTF_8.decode(buffer2).toString();
System.out.println(str2);
}
}
Output
+--------+-------------------- all ------------------------+----------------+
position: [5], limit: [16]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 68 65 6c 6c 6f 00 00 00 00 00 00 00 00 00 00 00 |hello...........|
+--------+-------------------------------------------------+----------------+
+--------+-------------------- all ------------------------+----------------+
position: [0], limit: [5]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 68 65 6c 6c 6f |hello |
+--------+-------------------------------------------------+----------------+
+--------+-------------------- all ------------------------+----------------+
position: [0], limit: [5]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 68 65 6c 6c 6f |hello |
+--------+-------------------------------------------------+----------------+
hello
hello
Buffer is not thread-safe.
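Since Buffer is not thread-safe, one common remedy (a sketch of ours, not from the original) is to confine each buffer to a single thread, e.g. with ThreadLocal, so no locking is needed:
import java.nio.ByteBuffer;
public class ThreadConfinedBuffers {
// one buffer per thread: no sharing, hence no synchronization
private static final ThreadLocal<ByteBuffer> LOCAL =
ThreadLocal.withInitial(() -> ByteBuffer.allocate(16));
public static void main(String[] args) throws InterruptedException {
Runnable task = () -> {
ByteBuffer buf = LOCAL.get(); // this thread's own buffer
buf.clear();
buf.put((byte) 1);
buf.flip();
System.out.println(Thread.currentThread().getName() + " read " + buf.get());
};
Thread t1 = new Thread(task, "t1");
Thread t2 = new Thread(task, "t2");
t1.start(); t2.start();
t1.join(); t2.join();
}
}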
Scattering reads: given a text file 3parts.txt with the content
onetwothree
reading as follows fills the data into multiple buffers
try (RandomAccessFile file = new RandomAccessFile("helloword/3parts.txt", "rw")) {
FileChannel channel = file.getChannel();
ByteBuffer a = ByteBuffer.allocate(3);
ByteBuffer b = ByteBuffer.allocate(3);
ByteBuffer c = ByteBuffer.allocate(5);
channel.read(new ByteBuffer[]{a, b, c});
a.flip();
b.flip();
c.flip();
debug(a);
debug(b);
debug(c);
} catch (IOException e) {
e.printStackTrace();
}
Result
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 6f 6e 65 |one |
+--------+-------------------------------------------------+----------------+
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 74 77 6f |two |
+--------+-------------------------------------------------+----------------+
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 74 68 72 65 65 |three |
+--------+-------------------------------------------------+----------------+
Gathering writes: the following writes the data of multiple buffers into the channel
try (RandomAccessFile file = new RandomAccessFile("helloword/3parts.txt", "rw")) {
FileChannel channel = file.getChannel();
ByteBuffer d = ByteBuffer.allocate(4);
ByteBuffer e = ByteBuffer.allocate(4);
channel.position(11);
d.put(new byte[]{'f', 'o', 'u', 'r'});
e.put(new byte[]{'f', 'i', 'v', 'e'});
d.flip();
e.flip();
debug(d);
debug(e);
channel.write(new ByteBuffer[]{d, e});
} catch (IOException e) {
e.printStackTrace();
}
Output
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 66 6f 75 72 |four |
+--------+-------------------------------------------------+----------------+
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 66 69 76 65 |five |
+--------+-------------------------------------------------+----------------+
Resulting file content
onetwothreefourfive
public class TestGatheringWrites {
public static void main(String[] args) {
ByteBuffer b1 = StandardCharsets.UTF_8.encode("hello");
ByteBuffer b2 = StandardCharsets.UTF_8.encode("world");
ByteBuffer b3 = StandardCharsets.UTF_8.encode("你好");
try (FileChannel channel = new RandomAccessFile("words2.txt", "rw").getChannel()) {
channel.write(new ByteBuffer[]{b1, b2, b3});
} catch (IOException e) {
e.printStackTrace();
}
}
}
/* resulting file content */
helloworld你好
Multiple messages are sent to the server over the network, separated by \n
But for some reason the data is regrouped on receipt. For example, the original 3 messages
Hello,world\n
I'm zhangsan\n
How are you?\n
arrive as the two byteBuffers below (sticky packet: messages merged together on send; half packet: one message split apart on send)
Hello,world\nI'm zhangsan\nHo
w are you?\n
Your task: write a program that restores the garbled data to the original \n-separated messages
public static void main(String[] args) {
ByteBuffer source = ByteBuffer.allocate(32);
// 11 24
source.put("Hello,world\nI'm zhangsan\nHo".getBytes());
split(source);
source.put("w are you?\nhaha!\n".getBytes());
split(source);
}
private static void split(ByteBuffer source) {
source.flip();
int oldLimit = source.limit();
for (int i = 0; i < oldLimit; i++) {
if (source.get(i) == '\n') {
System.out.println(i);
ByteBuffer target = ByteBuffer.allocate(i + 1 - source.position());
// 0 ~ limit
source.limit(i + 1);
target.put(source); // read from source, write to target
debugAll(target);
source.limit(oldLimit);
}
}
source.compact();
}
public class TestByteBufferExam {
public static void main(String[] args) {
/*
Multiple messages are sent to the server, separated by \n.
For some reason the data is regrouped on receipt; the original 3 messages
Hello,world\n
I'm zhangsan\n
How are you?\n
arrive as the two byteBuffers below (sticky packet, half packet)
Hello,world\nI'm zhangsan\nHo
w are you?\n
Write a program that restores the garbled data to the original \n-separated messages
*/
ByteBuffer source = ByteBuffer.allocate(32);
source.put("Hello,world\nI'm zhangsan\nHo".getBytes());
split(source);
source.put("w are you?\n".getBytes());
split(source);
}
private static void split(ByteBuffer source) {
// switch to read mode
source.flip();
for (int i = 0; i < source.limit(); i++) {
// found a complete message
if (source.get(i) == '\n') {
// compute the message length
int length = i - source.position() + 1;
// store this complete message in a new ByteBuffer
ByteBuffer target = ByteBuffer.allocate(length);
// read from source, write to target
for (int j = 0; j < length; j++) {
// every byte read from source is written into target
byte b = source.get(); // note: each get() advances source's position by one
target.put(b);
}
debugAll(target);
}
}
source.compact(); // note: source.clear() must not be used here,
// because the unread trailing bytes must not be discarded; compact() moves the unread data to the very front of the buffer
}
}
FileChannel only works in blocking mode
(meaning: FileChannel cannot be used together with a Selector! Only network channels such as SocketChannel can work with a Selector in non-blocking mode)
A FileChannel cannot be opened directly; it must be obtained through FileInputStream, FileOutputStream, or RandomAccessFile, each of which has a getChannel method (the channel is read-only when obtained from FileInputStream, write-only from FileOutputStream, and follows the open mode of a RandomAccessFile)
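A compact sketch of ours (file names are illustrative) of the three ways to obtain a FileChannel and the access each grants:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
public class GetChannelDemo {
public static void main(String[] args) {
try (FileChannel readOnly = new FileInputStream("data.txt").getChannel(); // can only read
FileChannel writeOnly = new FileOutputStream("out.txt").getChannel(); // can only write
FileChannel readWrite = new RandomAccessFile("data.txt", "rw").getChannel()) { // read and write
System.out.println("channels opened");
} catch (IOException e) {
e.printStackTrace();
}
}
}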
Reading fills the ByteBuffer with data from the channel; the return value is the number of bytes read, and -1 means end of file
int readBytes = channel.read(buffer);
The correct way to write, e.g. to a SocketChannel, is shown below (one write call cannot guarantee that all the data in the buffer is written to the channel, so check whether the buffer still has remaining data and keep writing)
ByteBuffer buffer = ...;
buffer.put(...); // fill in data
buffer.flip(); // switch to read mode
while(buffer.hasRemaining()) {
channel.write(buffer);
}
channel.write is called inside a while loop because one write call cannot guarantee that the buffer's entire contents are written to the channel
The channel must be closed; however, calling close on the FileInputStream, FileOutputStream, or RandomAccessFile closes the channel indirectly (this combines well with a try-with-resources block)
Get the current position
long pos = channel.position();
Set the current position
long newPos = ...;
channel.position(newPos);
If the position is set past the end of the file, a read returns -1, and a write grows the file (the bytes between the old end-of-file and the newly written data are unspecified, typically zeros)
Use the size method to get the file size
For performance, the operating system caches data instead of writing it to disk immediately. Call force(true) to flush the file's content and metadata (permissions and the like) to disk at once
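A small sketch of ours (the file name is illustrative) tying position, size, and force together:
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
public class FileChannelPositionDemo {
public static void main(String[] args) throws IOException {
try (RandomAccessFile file = new RandomAccessFile("data.txt", "rw")) {
FileChannel channel = file.getChannel();
System.out.println("size: " + channel.size());
channel.position(channel.size()); // jump to the end of the file
channel.write(ByteBuffer.wrap("!".getBytes())); // append one byte
channel.force(true); // flush content and metadata to disk
}
}
}
The next example uses transferTo, which copies data from one channel to another efficiently (using zero-copy where the operating system supports it).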
String FROM = "helloword/data.txt";
String TO = "helloword/to.txt";
long start = System.nanoTime();
try (FileChannel from = new FileInputStream(FROM).getChannel();
FileChannel to = new FileOutputStream(TO).getChannel();
) {
from.transferTo(0, from.size(), to);
} catch (IOException e) {
e.printStackTrace();
}
long end = System.nanoTime();
System.out.println("transferTo 用时:" + (end - start) / 1000_000.0);
输出
transferTo 用时:8.2011
public class TestFileChannelTransferTo {
public static void main(String[] args) {
try (
FileChannel from = new FileInputStream("data.txt").getChannel();
FileChannel to = new FileOutputStream("to.txt").getChannel();
) {
long size = from.size();
// left = number of bytes still to transfer
for (long left = size; left > 0; ) {
System.out.println("position:" + (size - left) + " left:" + left);
left = left - from.transferTo((size - left), left, to);
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
Transferring a really large file in practice (transferTo moves at most 2 GiB per call, which is why the loop above keeps retrying with the remaining bytes):
position:0 left:7769948160
position:2147483647 left:5622464513
position:4294967294 left:3474980866
position:6442450941 left:1327497219
jdk7 introduced the Path and Paths classes
Path source = Paths.get("1.txt"); // 相对路径 使用 user.dir 环境变量来定位 1.txt
Path source = Paths.get("d:\\1.txt"); // 绝对路径 代表了 d:\1.txt
Path source = Paths.get("d:/1.txt"); // 绝对路径 同样代表了 d:\1.txt
Path projects = Paths.get("d:\\data", "projects"); // 代表了 d:\data\projects
. stands for the current directory, and .. stands for the parent directory. For example, with the directory structure below
d:
|- data
|- projects
|- a
|- b
Code
Path path = Paths.get("d:\\data\\projects\\a\\..\\b");
System.out.println(path);
System.out.println(path.normalize()); // normalize the path
Output
d:\data\projects\a\..\b
d:\data\projects\b
Path path = Paths.get("helloword/data.txt");
System.out.println(Files.exists(path));
Path path = Paths.get("helloword/d1");
Files.createDirectory(path);
Path path = Paths.get("helloword/d1/d2");
Files.createDirectories(path);
Path source = Paths.get("helloword/data.txt");
Path target = Paths.get("helloword/target.txt");
Files.copy(source, target);
Files.copy throws FileAlreadyExistsException if target already exists; to overwrite target with source, control this with StandardCopyOption
Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
public class TestFilesCopy {
public static void main(String[] args) throws IOException {
long start = System.currentTimeMillis();
String source = "D:\\Snipaste-1.16.2-x64";
String target = "D:\\Snipaste-1.16.2-x64aaa";
Files.walk(Paths.get(source)).forEach(path -> {
try {
String targetName = path.toString().replace(source, target);
// a directory
if (Files.isDirectory(path)) {
Files.createDirectory(Paths.get(targetName));
}
// a regular file
else if (Files.isRegularFile(path)) {
Files.copy(path, Paths.get(targetName));
}
} catch (IOException e) {
e.printStackTrace();
}
});
long end = System.currentTimeMillis();
System.out.println(end - start);
}
}
Path source = Paths.get("helloword/data.txt");
Path target = Paths.get("helloword/data.txt");
Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
Path target = Paths.get("helloword/target.txt");
Files.delete(target);
Path target = Paths.get("helloword/d1");
Files.delete(target);
public static void main(String[] args) throws IOException {
Path path = Paths.get("C:\\Program Files\\Java\\jdk1.8.0_91");
// a plain int cannot be used here (variables captured by the anonymous class must be effectively final)
AtomicInteger dirCount = new AtomicInteger();
AtomicInteger fileCount = new AtomicInteger();
Files.walkFileTree(path, new SimpleFileVisitor<Path>(){
@Override
public FileVisitResult preVisitDirectory(Path dir,
BasicFileAttributes attrs) throws IOException {
System.out.println(dir);
dirCount.incrementAndGet();
return super.preVisitDirectory(dir, attrs);
}
@Override
public FileVisitResult visitFile(Path file,
BasicFileAttributes attrs) throws IOException {
System.out.println(file);
fileCount.incrementAndGet();
return super.visitFile(file, attrs);
}
});
System.out.println(dirCount); // 133
System.out.println(fileCount); // 1479
}
Path path = Paths.get("C:\\Program Files\\Java\\jdk1.8.0_91");
AtomicInteger fileCount = new AtomicInteger();
Files.walkFileTree(path, new SimpleFileVisitor<Path>(){
@Override
public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
throws IOException {
if (file.toFile().getName().endsWith(".jar")) {
fileCount.incrementAndGet();
}
return super.visitFile(file, attrs);
}
});
System.out.println(fileCount); // 724
Path path = Paths.get("d:\\a");
Files.walkFileTree(path, new SimpleFileVisitor<Path>(){
@Override
public FileVisitResult visitFile(Path file,
BasicFileAttributes attrs) throws IOException {
Files.delete(file);
return super.visitFile(file, attrs);
}
@Override
public FileVisitResult postVisitDirectory(Path dir,
IOException exc) throws IOException {
Files.delete(dir);
return super.postVisitDirectory(dir, exc);
}
});
Deleting is a dangerous operation; make sure the directory tree you delete recursively contains nothing important
The blocking problem: in blocking mode the server looks rather dumb. In the while(true) loop below: 1. accept only proceeds once a new connection request arrives; 2. read only proceeds once a client sends data. Otherwise the thread just blocks at accept/read and goes no further.
import lombok.extern.slf4j.Slf4j;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.List;
import static com.zzhua.util.ByteBufferUtil.debugRead;
@Slf4j
public class Server {
public static void main(String[] args) throws Exception {
// using nio to understand blocking mode, single-threaded
// 0. ByteBuffer
ByteBuffer buffer = ByteBuffer.allocate(16);
// 1. create the server
ServerSocketChannel ssc = ServerSocketChannel.open();
// 2. bind the listening port
ssc.bind(new InetSocketAddress(8080));
// 3. connection list
List<SocketChannel> channels = new ArrayList<>();
while (true) {
// 4. accept establishes the client connection; the SocketChannel is used to communicate with that client
log.debug("connecting...");
// accept is a blocking method: the thread stops here until a connection arrives
SocketChannel sc = ssc.accept();
log.debug("connected... {}", sc);
channels.add(sc);
for (SocketChannel channel : channels) {
// 5. receive the data sent by the client
log.debug("before read... {}", channel);
// read is a blocking method: the thread stops here until the channel has data to read
channel.read(buffer);
buffer.flip();
debugRead(buffer);
buffer.clear();
log.debug("after read...{}", channel);
}
}
}
}
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;
public class Client {
public static void main(String[] args) throws Exception {
SocketChannel sc = SocketChannel.open();
sc.connect(new InetSocketAddress("localhost", 8080));
System.out.println("waiting...");
}
}
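To drive the server's blocked read, the client has to send something after connecting; a sketch of ours (the payload is arbitrary):
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
public class WritingClient {
public static void main(String[] args) throws Exception {
SocketChannel sc = SocketChannel.open();
sc.connect(new InetSocketAddress("localhost", 8080));
// the server's blocked read returns once these bytes arrive
sc.write(StandardCharsets.UTF_8.encode("hello"));
System.in.read(); // keep the connection open until Enter is pressed
}
}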
Setting ssc.configureBlocking(false) puts the ServerSocketChannel into non-blocking mode, so accept no longer blocks (but if no client is connecting at that moment, accept returns null).
Setting sc.configureBlocking(false) puts the SocketChannel into non-blocking mode, so read no longer blocks (but if the client has sent no data, read returns 0).
This solves the blocking problem described above, but now the main thread spins in the while(true) loop and is overworked: it keeps looping even when there are no connection requests and nothing to read, wasting CPU.
import lombok.extern.slf4j.Slf4j;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.List;
import static com.zzhua.util.ByteBufferUtil.debugRead;
@Slf4j
public class Server {
public static void main(String[] args) throws Exception {
// using nio to understand non-blocking mode, single-threaded
// 0. ByteBuffer
ByteBuffer buffer = ByteBuffer.allocate(16);
// 1. create the server
ServerSocketChannel ssc = ServerSocketChannel.open();
ssc.configureBlocking(false); // non-blocking mode
// 2. bind the listening port
ssc.bind(new InetSocketAddress(8080));
// 3. connection list
List<SocketChannel> channels = new ArrayList<>();
while (true) {
// 4. accept establishes the client connection; the SocketChannel is used to communicate with that client
SocketChannel sc = ssc.accept(); // non-blocking: the thread keeps running; if no connection was made, sc is null
// (the main thread keeps looping to accept connections;
// when an iteration hits accept just as a client is connecting, sc is non-null)
if (sc != null) {
log.debug("connected... {}", sc);
sc.configureBlocking(false); // non-blocking mode
channels.add(sc);
}
for (SocketChannel channel : channels) {
// 5. receive the data sent by the client
int read = channel.read(buffer);// non-blocking: the thread keeps running; if no data was read, read returns 0
if (read > 0) {
buffer.flip();
debugRead(buffer);
buffer.clear();
log.debug("after read...{}", channel);
}
}
}
}
}
The client code is unchanged
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;
public class Client {
public static void main(String[] args) throws Exception {
SocketChannel sc = SocketChannel.open();
sc.connect(new InetSocketAddress("localhost", 8080));
System.out.println("waiting...");
}
}
A single thread working with a Selector can monitor the read/write events of multiple Channels; this is called multiplexing
Internally, a Selector maintains one SelectionKey per registered channel (a one-to-one mapping; the SelectionKey also records which events the channel is interested in: accept, connect, read, write).
When select() is called and no channel has an event, the method blocks until some channel fires an event it is interested in. When that happens, the channel's SelectionKey is marked as having a pending event and is added to a dedicated Set, selectedKeys. The mark on the key is only cleared once the event is handled (or the interest is cancelled). If the event is left unhandled (and not cancelled), the next select() call adds the still-marked SelectionKey to selectedKeys again, and this repeats until the event is finally handled (or cancelled).
When a client sends a message (socketChannel.write("Hi")), a read event fires on the server-side socketChannel. This event must be handled (either channel.read, or cancel the selectionKey); otherwise the next selector.select() does not block, selector.selectedKeys still contains the key, nothing gets handled, and the loop spins endlessly.
When a client closes abnormally, a read event also fires on the socketChannel. But calling channel.read(buf) then throws an exception: the remote host forcibly closed an existing connection. If you merely catch the exception without cancelling the SelectionKey, the read event was never handled (read threw before completing), so the next selector.select() puts the marked key back into selectedKeys, read throws again, nothing is handled again, and the server loops on errors. Therefore call selectionKey.cancel() when the error occurs.
When a client closes normally (it calls socketChannel.close()), a read event fires on the server-side socketChannel and channel.read(buffer) returns -1 (-1 indicates a normal close). You must then call cancel() on the selectionKey; otherwise the next selector.select() does not block and the set returned by selector.selectedKeys() still contains the key.
@Slf4j
public class Server {
public static void main(String[] args) throws IOException {
// 1. create the selector, which manages multiple channels
Selector selector = Selector.open();
ServerSocketChannel ssc = ServerSocketChannel.open();
// enable non-blocking mode (so that ServerSocketChannel#accept no longer blocks)
ssc.configureBlocking(false);
// 2. establish the link between selector and channel (registration)
// the SelectionKey is how, after an event fires, we can tell which event occurred and on which channel
// 0 means interested in no events at all yet
SelectionKey sscKey = ssc.register(selector, 0, null);
// this key only cares about the accept event
sscKey.interestOps(SelectionKey.OP_ACCEPT);
log.debug("sscKey:{}", sscKey);
ssc.bind(new InetSocketAddress(8080));
while (true) {
// 3. select: when no events have occurred the thread blocks; it resumes once events arrive
// Important: select does NOT block while an event remains unhandled (which would make this while loop spin)!
// Once an event fires it must either be handled (e.g. channel.accept()) or cancelled (selectionKey.cancel()); it cannot be ignored
// Roughly: when the selector detects an event on some channel, the selectionKey associated with
// that channel is collected; but if the collected key's event is then left unhandled,
// the key stays flagged and select() will not block (meaning: the code keeps falling through);
// note that merely removing the key from the selectedKeys set does not help (select() still won't block):
// the event on the channel itself must be dealt with
selector.select();
// 4. handle the events; selectedKeys contains every key with a pending event
// (an iterator is used so that elements can be removed while traversing the set)
// ("every event" means: each SelectionKey whose registered interest actually fired)
// (registering a channel with the Selector produces a SelectionKey; the Selector watches all the channels it manages,
// and when a channel fires an event, the corresponding SelectionKey is added to the selectedKeys set (a Set).
// The set never removes keys by itself: after handling a selectionKey we must
// remove it ourselves, or the set obtained next round will still contain it)
Iterator<SelectionKey> iter = selector.selectedKeys().iterator(); // accept, read
while (iter.hasNext()) {
SelectionKey key = iter.next();
// when handling a key, remove it from selectedKeys, or the next round will misbehave
// (without the removal, the next iteration gets the very same set back from the selector,
// with this key still in it, even though its event has already been handled)
iter.remove();
log.debug("key: {}", key);
// 5. distinguish the event type
if (key.isAcceptable()) { // accept event
// get the channel that fired the event
ServerSocketChannel channel = (ServerSocketChannel) key.channel();
// only call accept once a connection event fires (instead of busy-looping on accept as in the non-blocking example)
// (handling the event marks the key's event as processed, but the key is not removed from selectedKeys automatically - events must not be ignored)
// (in non-blocking mode, accept returns null when no client is waiting to connect)
SocketChannel sc = channel.accept();
// a Selector requires the SocketChannel to be in non-blocking mode (so that SocketChannel#read no longer blocks)
sc.configureBlocking(false);
// a Selector can manage many Channels, so register this channel with the selector too
// (the returned key is where this channel's events will show up)
SelectionKey scKey = sc.register(selector, 0, null);
// interested in the read event
scKey.interestOps(SelectionKey.OP_READ);
log.debug("{}", sc);
log.debug("scKey:{}", scKey);
} else if (key.isReadable()) { // read event
try {
// get the channel that fired the event
SocketChannel channel = (SocketChannel) key.channel();
ByteBuffer buffer = ByteBuffer.allocate(4);
// (handle the readable event - events must not be ignored)
int read = channel.read(buffer); // on a normal disconnect, read returns -1
if(read == -1) { // did the client close normally? (a normal close also fires a read event)
// the client closed normally (it called socketChannel.close()),
// so cancel the key (truly removing it from the selector's keys set).
// If the key were not cancelled here, select() would not block, and
// selector.selectedKeys() would keep returning this key, looping forever
key.cancel();
} else {
buffer.flip();
// debugAll(buffer);
System.out.println(Charset.defaultCharset().decode(buffer));
}
} catch (IOException e) {
e.printStackTrace();
// (note: an abnormal client close fires a read event on the channel,
// but channel.read(buffer) then throws: the remote host forcibly closed an existing connection;
// since read threw, it did not complete, so the key's read event remains unhandled;
// if the key is not cancelled, the next selector.select() puts it
// back into selectedKeys, read on the long-dead channel throws again,
// and the server loops on errors)
// the client disconnected abnormally, so cancel the key (truly removing it from the selector's keys set)
key.cancel();
}
}
}
}
}
}
public class Client {
public static void main(String[] args) throws Exception {
SocketChannel sc = SocketChannel.open();
sc.connect(new InetSocketAddress("localhost", 8080));
System.out.println("waiting...");
}
}
Benefits: a single thread working with a Selector can monitor events on many channels; the thread is only busy when events actually occur, avoiding both the cost of one thread per connection and the CPU-burning busy loop of the plain non-blocking version.
Selector selector = Selector.open();
This is also called registering events; the selector only pays attention to events that have been bound
channel.configureBlocking(false);
SelectionKey key = channel.register(selector, interestOps); // interestOps: the events to bind, e.g. SelectionKey.OP_ACCEPT
Whether events have occurred can be checked with the following three methods; the return value is the number of channels on which events fired
Method 1: block until a bound event occurs
int count = selector.select();
Method 2: block until a bound event occurs, or the timeout elapses (in ms)
int count = selector.select(long timeout);
Method 3: never block; return immediately whether or not events occurred, then inspect the return value yourself
int count = selector.selectNow();
- when an event occurs
- a client initiates a connection request: triggers an accept event
- a client sends data, or closes normally or abnormally: triggers a read event; also, if the data sent is larger than the buffer, multiple read events are triggered
- the channel becomes writable: triggers a write event
- when the nio bug occurs on linux
- when selector.wakeup() is called (see the sketch below)
- when selector.close() is called
- when the thread running the selector is interrupted
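For example, selector.wakeup() lets another thread break an in-progress (or the very next) select call; a minimal sketch of ours:
import java.io.IOException;
import java.nio.channels.Selector;
public class WakeupDemo {
public static void main(String[] args) throws IOException {
Selector selector = Selector.open();
new Thread(() -> {
try {
Thread.sleep(1000);
} catch (InterruptedException ignored) {
}
selector.wakeup(); // makes the blocked select() below return
}).start();
System.out.println("selecting...");
int count = selector.select(); // blocks until wakeup() is called
System.out.println("woke up, count = " + count); // 0: no real events occurred
}
}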
The client code:
public class Client {
public static void main(String[] args) {
try (Socket socket = new Socket("localhost", 8080)) {
System.out.println(socket);
socket.getOutputStream().write("world".getBytes());
System.in.read();
} catch (IOException e) {
e.printStackTrace();
}
}
}
The server code:
@Slf4j
public class ChannelDemo6 {
public static void main(String[] args) {
try (ServerSocketChannel channel = ServerSocketChannel.open()) {
channel.bind(new InetSocketAddress(8080));
System.out.println(channel);
Selector selector = Selector.open();
channel.configureBlocking(false);
channel.register(selector, SelectionKey.OP_ACCEPT);
while (true) {
int count = selector.select();
// int count = selector.selectNow();
log.debug("select count: {}", count);
// if(count <= 0) {
// continue;
// }
// fetch all pending events
Set<SelectionKey> keys = selector.selectedKeys();
// iterate over the events, handling them one by one
Iterator<SelectionKey> iter = keys.iterator();
while (iter.hasNext()) {
SelectionKey key = iter.next();
// determine the event type
if (key.isAcceptable()) {
ServerSocketChannel c = (ServerSocketChannel) key.channel();
// must be handled
SocketChannel sc = c.accept();
log.debug("{}", sc);
}
// once handled, the key must be removed
iter.remove();
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
Once an event fires it must be handled or cancelled; doing nothing is not an option, because otherwise the same event fires again on the next select. This is because nio uses level-triggered notification underneath.
@Slf4j
public class ChannelDemo6 {
public static void main(String[] args) {
try (ServerSocketChannel channel = ServerSocketChannel.open()) {
channel.bind(new InetSocketAddress(8080));
System.out.println(channel);
Selector selector = Selector.open();
channel.configureBlocking(false);
channel.register(selector, SelectionKey.OP_ACCEPT);
while (true) {
int count = selector.select();
// int count = selector.selectNow();
log.debug("select count: {}", count);
// if(count <= 0) {
// continue;
// }
// fetch all pending events
Set<SelectionKey> keys = selector.selectedKeys();
// iterate over the events, handling them one by one
Iterator<SelectionKey> iter = keys.iterator();
while (iter.hasNext()) {
SelectionKey key = iter.next();
// determine the event type
if (key.isAcceptable()) {
ServerSocketChannel c = (ServerSocketChannel) key.channel();
// must be handled
SocketChannel sc = c.accept();
sc.configureBlocking(false);
sc.register(selector, SelectionKey.OP_READ);
log.debug("connection established: {}", sc);
} else if (key.isReadable()) {
SocketChannel sc = (SocketChannel) key.channel();
ByteBuffer buffer = ByteBuffer.allocate(128);
int read = sc.read(buffer);
if(read == -1) {
key.cancel();
sc.close();
} else {
buffer.flip();
debug(buffer);
}
}
// once handled, the key must be removed
iter.remove();
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
Start two clients, each sending different text; output:
sun.nio.ch.ServerSocketChannelImpl[/0:0:0:0:0:0:0:0:8080]
21:16:39 [DEBUG] [main] c.i.n.ChannelDemo6 - select count: 1
21:16:39 [DEBUG] [main] c.i.n.ChannelDemo6 - connection established: java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:60367]
21:16:39 [DEBUG] [main] c.i.n.ChannelDemo6 - select count: 1
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 68 65 6c 6c 6f |hello |
+--------+-------------------------------------------------+----------------+
21:16:59 [DEBUG] [main] c.i.n.ChannelDemo6 - select count: 1
21:16:59 [DEBUG] [main] c.i.n.ChannelDemo6 - connection established: java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:60378]
21:16:59 [DEBUG] [main] c.i.n.ChannelDemo6 - select count: 1
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 77 6f 72 6c 64 |world |
+--------+-------------------------------------------------+----------------+
Because select adds the relevant key to the selectedKeys set when an event fires but does not remove it after processing, we have to remove it in our own code. For example:
- the first select fires the accept event on sscKey, and sscKey is not removed
- the second select fires the read event on sckey, but selectedKeys still contains last round's sscKey; processing it calls accept again with no connection actually pending, which returns null and leads to a NullPointerException
cancel deregisters the channel's key from the selector and removes it from the keys set; no further events are listened for on it
A student once wrote code like the following; think about the two questions in the comments. The example uses bio, but the same reasoning applies to nio.
public class Server {
public static void main(String[] args) throws IOException {
ServerSocket ss=new ServerSocket(9000);
while (true) {
Socket s = ss.accept();
InputStream in = s.getInputStream();
// is there a problem with writing it this way?
byte[] arr = new byte[4];
while(true) {
int read = in.read(arr);
// and is there a problem here?
if(read == -1) {
break;
}
System.out.println(new String(arr, 0, read));
}
}
}
}
Client
public class Client {
public static void main(String[] args) throws IOException {
Socket max = new Socket("localhost", 9000);
OutputStream out = max.getOutputStream();
out.write("hello".getBytes());
out.write("world".getBytes());
out.write("你好".getBytes());
max.close();
}
}
Output
hell
owor
ld�
�好
Why? TCP is a byte stream with no message boundaries: the 4-byte array cuts the stream at arbitrary points, and the multi-byte UTF-8 characters of 你好 are split across two reads, so each incomplete fragment decodes to �.
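A tiny sketch of ours (assuming UTF-8, where 你 and 好 are 3 bytes each) showing how cutting a multi-byte character produces replacement characters:
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
public class SplitCharDemo {
public static void main(String[] args) {
byte[] bytes = "你好".getBytes(StandardCharsets.UTF_8); // 6 bytes in total
byte[] first = Arrays.copyOfRange(bytes, 0, 4); // cuts 好 in half
byte[] second = Arrays.copyOfRange(bytes, 4, 6);
System.out.println(new String(first, StandardCharsets.UTF_8)); // 你�
System.out.println(new String(second, StandardCharsets.UTF_8)); // ��
}
}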
The server code is below (note the ByteBuffer is allocated with only 4 bytes). Suppose a client connects and then sends 6 bytes with socketChannel.write("中国"): the received text is garbled, and the while(true) loop runs twice for this one message. In other words, when a message cannot be processed in a single pass, the loop runs again (understand it this way: the channel reads its contents into the byteBuffer; since the channel's data was not fully consumed, selector.select() does not block, and the next iteration reads the remaining bytes from the channel) until the whole message has been processed.
@Slf4j
public class Server {
public static void main(String[] args) throws Exception {
Selector selector = Selector.open();
ServerSocketChannel serverSocketChannel = ServerSocketChannel.open();
serverSocketChannel.configureBlocking(false);
SelectionKey sscSelectionKey = serverSocketChannel.register(selector, 0, null);
log.info("sscSelectionKey: {}", sscSelectionKey);
sscSelectionKey.interestOps(SelectionKey.OP_ACCEPT);
serverSocketChannel.bind(new InetSocketAddress(8080));
Set<SelectionKey> skSet = selector.selectedKeys();
while (true) {
selector.select();
log.info("select...");
Set<SelectionKey> selectedKeys = selector.selectedKeys();
if (skSet == selectedKeys) {
log.info("the same Set instance"); // proves select() reuses one and the same set
} else {
log.info("a different Set instance");
}
log.info("selectedKeys: {}, hash: {}" , selectedKeys, selectedKeys.hashCode());
Iterator<SelectionKey> iterator = selectedKeys.iterator();
while (iterator.hasNext()) {
SelectionKey selectionKey = iterator.next();
iterator.remove();
if (selectionKey.isAcceptable()) {
ServerSocketChannel ssChannel = (ServerSocketChannel) selectionKey.channel();
SocketChannel socketChannel = ssChannel.accept();
socketChannel.configureBlocking(false);
SelectionKey sk = socketChannel.register(selector, SelectionKey.OP_READ);
log.info("注册socketChannel: {}", socketChannel);
log.info("注册selectionKey: {}", sk);
} else if (selectionKey.isReadable()) {
try {
SocketChannel socketChannel = (SocketChannel) selectionKey.channel();
ByteBuffer buf = ByteBuffer.allocate(4);
int read = socketChannel.read(buf);
if (read == -1) {
selectionKey.cancel();
} else {
buf.flip();
System.out.println(StandardCharsets.UTF_8.decode(buf).toString());
}
} catch (IOException e) {
log.error("发生异常: {}", e);
selectionKey.cancel();
}
}
}
}
}
}
public class Client {
public static void main(String[] args) throws Exception {
SocketChannel sc = SocketChannel.open();
sc.connect(new InetSocketAddress("localhost", 8080));
System.out.println("waiting...");
}
}
When the client sends a message with sc.write(Charset.defaultCharset().encode("中国")), the server receives garbled text, as shown below
21:06:38 [INFO ] [main] c.z.nio.c4.Server - select...
21:06:38 [INFO ] [main] c.z.nio.c4.Server - the same Set instance
21:06:38 [INFO ] [main] c.z.nio.c4.Server - selectedKeys: [sun.nio.ch.SelectionKeyImpl@1e643faf], hash: 509886383
中�
21:06:38 [INFO ] [main] c.z.nio.c4.Server - select...
21:06:38 [INFO ] [main] c.z.nio.c4.Server - the same Set instance
21:06:38 [INFO ] [main] c.z.nio.c4.Server - selectedKeys: [sun.nio.ch.SelectionKeyImpl@1e643faf], hash: 509886383
��
private static void split(ByteBuffer source) {
// switch to read mode
source.flip();
for (int i = 0; i < source.limit(); i++) {
// found a complete message
if (source.get(i) == '\n') {
int length = i - source.position() + 1;
// store this complete message in a new ByteBuffer
// 1. first allocate a new ByteBuffer of the required length => target, to hold one complete message
ByteBuffer target = ByteBuffer.allocate(length);
// 2. read from source, write to target
for (int j = 0; j < length; j++) {
// each byte read from source (advancing source's position by one) is written into target
target.put(source.get());
}
debugAll(target);
}
}
// compact: sets position to remaining() (so the next write starts right after the kept bytes) and limit to capacity(),
// and it moves the trailing unread data to the very front (shifting the data as a whole)
source.compact(); // 0123456789abcdef position 16 limit 16
}
public static void main(String[] args) throws IOException {
// 1. create the selector, which manages multiple channels
Selector selector = Selector.open();
ServerSocketChannel ssc = ServerSocketChannel.open();
ssc.configureBlocking(false);
// 2. establish the link between selector and channel (registration)
// the SelectionKey is how, after an event fires, we can tell which event occurred and on which channel
SelectionKey sscKey = ssc.register(selector, 0, null);
// this key only cares about the accept event
sscKey.interestOps(SelectionKey.OP_ACCEPT);
log.debug("sscKey:{}", sscKey);
ssc.bind(new InetSocketAddress(8080));
while (true) {
// 3. select: when no events have occurred the thread blocks; it resumes once events arrive
// select does not block while an event remains unhandled; events must be handled or cancelled, never ignored
selector.select();
log.info("select...");
// 4. handle the events; selectedKeys contains every key with a pending event
Iterator<SelectionKey> iter = selector.selectedKeys().iterator(); // accept, read
while (iter.hasNext()) {
SelectionKey key = iter.next();
// when handling a key, remove it from selectedKeys, or the next round will misbehave
iter.remove();
log.debug("key: {}", key);
// 5. distinguish the event type
if (key.isAcceptable()) { // accept event
ServerSocketChannel channel = (ServerSocketChannel) key.channel();
SocketChannel sc = channel.accept();
sc.configureBlocking(false);
ByteBuffer buffer = ByteBuffer.allocate(16); // attachment
// associate a byteBuffer with the selectionKey as its attachment
// (this keeps the socketChannel, the selectionKey, and the attached buffer in one-to-one correspondence)
SelectionKey scKey = sc.register(selector, 0, buffer);
scKey.interestOps(SelectionKey.OP_READ);
log.debug("{}", sc);
log.debug("scKey:{}", scKey);
} else if (key.isReadable()) { // read event
try {
SocketChannel channel = (SocketChannel) key.channel(); // get the channel that fired the event
// fetch the attachment associated with the selectionKey (attached at registration time)
ByteBuffer buffer = (ByteBuffer) key.attachment();
int read = channel.read(buffer); // on a normal disconnect, read returns -1
log.info("read {} bytes", read);
if(read == -1) {
key.cancel();
} else {
// call the split method above
split(buffer);
// the buffer needs to grow
// (this condition means the buffer is completely full: its position equals its limit)
if (buffer.position() == buffer.limit()) {
log.info("growing the buffer...");
// double the capacity
ByteBuffer newBuffer = ByteBuffer.allocate(buffer.capacity() * 2);
// switch to read mode (before the copy below)
buffer.flip();
// copy the old buffer's data into the new, larger buffer
// (this call is equivalent to: while (src.hasRemaining()) dst.put(src.get());)
newBuffer.put(buffer); // 0123456789abcdef3333\n
key.attach(newBuffer); // attach the new buffer (replacing the previous attachment, if any)
}
}
} catch (IOException e) {
e.printStackTrace();
// the client disconnected, so cancel the key (truly removing it from the selector's keys set)
key.cancel();
}
}
}
}
}
SocketChannel sc = SocketChannel.open();
sc.connect(new InetSocketAddress("localhost", 8080));
SocketAddress address = sc.getLocalAddress();
// sc.write(Charset.defaultCharset().encode("hello\nworld\n"));
// send two messages in one go
sc.write(Charset.defaultCharset().encode("0123\n456789abcdef"));
sc.write(Charset.defaultCharset().encode("0123456789abcdef3333\n"));
System.in.read();
Read the log output below carefully to understand how the messages are processed!
1. If the data in the channel is not fully read in one pass, the next selector.select() does not block, and the remaining data is read in the next iteration.
2. First be clear about what split does: if the passed-in byteBuffer contains the separator (\n), it consumes the data up to and including the \n, then moves the remainder to the front. If after split the position still equals the limit, nothing was moved and the capacity is exhausted, so the buffer must grow.
3. The growth here is a naive doubling of the capacity; netty optimizes this and can resize adaptively.
4. The grown byteBuffer must stay visible to the same channel across events, so it is attached to the selectionKey, which in turn is tied to the channel.
23:28:53 [DEBUG] [main] c.z.nio.c5.Server - sscKey:sun.nio.ch.SelectionKeyImpl@2c8d66b2
23:28:58 [INFO ] [main] c.z.nio.c5.Server - select...
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - key: sun.nio.ch.SelectionKeyImpl@2c8d66b2
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:65015]
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - scKey:sun.nio.ch.SelectionKeyImpl@7c30a502
23:28:58 [INFO ] [main] c.z.nio.c5.Server - select...
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - key: sun.nio.ch.SelectionKeyImpl@7c30a502
23:28:58 [INFO ] [main] c.z.nio.c5.Server - read 16 bytes
+--------+-------------------- all ------------------------+----------------+
position: [5], limit: [5]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 30 31 32 33 0a |0123. |
+--------+-------------------------------------------------+----------------+
23:28:58 [INFO ] [main] c.z.nio.c5.Server - select...
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - key: sun.nio.ch.SelectionKeyImpl@7c30a502
23:28:58 [INFO ] [main] c.z.nio.c5.Server - read 5 bytes
23:28:58 [INFO ] [main] c.z.nio.c5.Server - growing the buffer...
23:28:58 [INFO ] [main] c.z.nio.c5.Server - select...
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - key: sun.nio.ch.SelectionKeyImpl@7c30a502
23:28:58 [INFO ] [main] c.z.nio.c5.Server - read 16 bytes
23:28:58 [INFO ] [main] c.z.nio.c5.Server - growing the buffer...
23:28:58 [INFO ] [main] c.z.nio.c5.Server - select...
23:28:58 [DEBUG] [main] c.z.nio.c5.Server - key: sun.nio.ch.SelectionKeyImpl@7c30a502
23:28:58 [INFO ] [main] c.z.nio.c5.Server - read 1 bytes
+--------+-------------------- all ------------------------+----------------+
position: [33], limit: [33]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 34 35 36 37 38 39 61 62 63 64 65 66 30 31 32 33 |456789abcdef0123|
|00000010| 34 35 36 37 38 39 61 62 63 64 65 66 33 33 33 33 |456789abcdef3333|
|00000020| 0a |. |
+--------+-------------------------------------------------+----------------+
public class WriteServer {
public static void main(String[] args) throws IOException {
ServerSocketChannel ssc = ServerSocketChannel.open();
ssc.configureBlocking(false);
ssc.bind(new InetSocketAddress(8080));
Selector selector = Selector.open();
ssc.register(selector, SelectionKey.OP_ACCEPT);
while(true) {
selector.select();
Iterator<SelectionKey> iter = selector.selectedKeys().iterator();
while (iter.hasNext()) {
SelectionKey key = iter.next();
iter.remove();
if (key.isAcceptable()) {
SocketChannel sc = ssc.accept();
sc.configureBlocking(false);
SelectionKey sckey = sc.register(selector, SelectionKey.OP_READ);
// 1. send a large payload to the client
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 3000000; i++) {
sb.append("a");
}
ByteBuffer buffer = Charset.defaultCharset().encode(sb.toString());
int write = sc.write(buffer);
// 3. write returns how many bytes were actually written
System.out.println("actually wrote: " + write);
// 4. only if unsent bytes remain do we need to register interest in the write event
if (buffer.hasRemaining()) {
// OP_READ is 1, OP_WRITE is 4
// add interest in the write event on top of the existing interest set
sckey.interestOps(sckey.interestOps() + SelectionKey.OP_WRITE);
// attach the buffer to sckey
sckey.attach(buffer);
}
} else if (key.isWritable()) {
ByteBuffer buffer = (ByteBuffer) key.attachment();
SocketChannel sc = (SocketChannel) key.channel();
int write = sc.write(buffer);
System.out.println("actually wrote: " + write);
if (!buffer.hasRemaining()) { // finished writing
key.interestOps(key.interestOps() - SelectionKey.OP_WRITE);
key.attach(null);
}
}
}
}
}
}
Client
public class WriteClient {
public static void main(String[] args) throws IOException {
Selector selector = Selector.open();
SocketChannel sc = SocketChannel.open();
sc.configureBlocking(false);
sc.register(selector, SelectionKey.OP_CONNECT | SelectionKey.OP_READ);
sc.connect(new InetSocketAddress("localhost", 8080));
int count = 0;
while (true) {
selector.select();
Iterator<SelectionKey> iter = selector.selectedKeys().iterator();
while (iter.hasNext()) {
SelectionKey key = iter.next();
iter.remove();
if (key.isConnectable()) {
System.out.println(sc.finishConnect());
} else if (key.isReadable()) {
ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);
count += sc.read(buffer);
buffer.clear();
System.out.println(count);
}
}
}
}
}
@Slf4j
public class WriteServer {
public static void main(String[] args) throws Exception {
Selector selector = Selector.open();
ServerSocketChannel ssc = ServerSocketChannel.open();
ssc.configureBlocking(false);
ssc.bind(new InetSocketAddress(8080));
ssc.register(selector, SelectionKey.OP_ACCEPT);
while (true) {
selector.select();
log.info("select...");
Set<SelectionKey> selectedKeys = selector.selectedKeys();
Iterator<SelectionKey> it = selectedKeys.iterator();
while (it.hasNext()) {
SelectionKey selectionKey = it.next();
it.remove();
if (selectionKey.isAcceptable()) {
SocketChannel socketChannel = ssc.accept();
socketChannel.configureBlocking(false);
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 30000000; i++) {
sb.append("a");
}
ByteBuffer buffer = Charset.defaultCharset().encode(sb.toString());
// 1. send a large payload (socketChannel#write cannot guarantee the whole buffer is written to the client in one call)
while (buffer.hasRemaining()) {
// 2. the return value is the number of bytes actually written
int writeCount = socketChannel.write(buffer);
log.info("wrote {} bytes", writeCount);
}
}
}
}
}
}
@Slf4j
public class WriteClient {
public static void main(String[] args) throws Exception {
SocketChannel socketChannel = SocketChannel.open();
socketChannel.connect(new InetSocketAddress(8080));
ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);// 1M
int count = 0;
while (true) {
int readCount = socketChannel.read(buffer);
count = count + readCount;
log.info("readCount:{}, count: {}", readCount, count);
buffer.clear();
}
}
}
/* server log output */
14:35:06 [INFO ] [main] c.z.n.c.WriteServer - select...
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - wrote 3014633 bytes
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - wrote 19267437 bytes
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - wrote 2621420 bytes
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - wrote 1441781 bytes
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - wrote 1179639 bytes
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - wrote 0 bytes
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - wrote 1441781 bytes
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - wrote 0 bytes
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - wrote 0 bytes
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - wrote 0 bytes
14:35:07 [INFO ] [main] c.z.n.c.WriteServer - wrote 1033309 bytes
/* client log output */
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 131071
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 262142
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 393213
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 524284
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 655355
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 786426
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 917497
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1048568
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1179639
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1310710
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1441781
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1572852
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1703923
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1834994
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1966065
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2097136
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2228207
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2359278
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2490349
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2621420
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2752491
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2883562
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3014633
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3145704
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3276775
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3407846
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3538917
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3669988
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3801059
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3932130
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4063201
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4194272
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4325343
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4456414
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4587485
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4718556
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4849627
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4980698
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5111769
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5242840
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5373911
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5504982
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5636053
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5767124
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5898195
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6029266
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6160337
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6291408
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6422479
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6553550
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6684621
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6815692
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6946763
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7077834
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7208905
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7339976
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7471047
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7602118
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7733189
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7864260
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7995331
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8126402
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8257473
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8388544
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8519615
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8650686
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8781757
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8912828
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9043899
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9174970
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9306041
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9437112
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9568183
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9699254
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9830325
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9961396
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 10092467
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 10223538
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 10354609
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 10485680
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 10616751
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 10747822
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 10878893
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11009964
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11141035
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11272106
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11403177
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11534248
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11665319
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11796390
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11927461
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 12058532
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 12189603
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 12320674
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 12451745
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 12582816
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 12713887
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 12844958
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 12976029
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13107100
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13238171
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13369242
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13500313
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13631384
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13762455
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13893526
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14024597
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14155668
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14286739
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14417810
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14548881
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14679952
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14811023
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14942094
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15073165
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15204236
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15335307
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15466378
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15597449
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15728520
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15859591
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15990662
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16121733
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16252804
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16383875
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16514946
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16646017
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16777088
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16908159
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17039230
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17170301
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17301372
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17432443
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17563514
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17694585
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17825656
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17956727
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18087798
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18218869
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18349940
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18481011
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18612082
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18743153
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18874224
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19005295
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19136366
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19267437
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19398508
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19529579
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19660650
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19791721
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19922792
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 20053863
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 20184934
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 20316005
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 20447076
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 20578147
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 20709218
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 20840289
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 20971360
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 21102431
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 21233502
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 21364573
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 21495644
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 21626715
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 21757786
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 21888857
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22019928
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22150999
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22282070
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22413141
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22544212
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22675283
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22806354
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22937425
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 23068496
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 23199567
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 23330638
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 23461709
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 23592780
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 23723851
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 23854922
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 23985993
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24117064
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24248135
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24379206
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24510277
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24641348
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24772419
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24903490
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25034561
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25165632
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25296703
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25427774
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25558845
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25689916
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25820987
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25952058
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26083129
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26214200
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26345271
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26476342
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26607413
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26738484
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26869555
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27000626
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27131697
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27262768
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27393839
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27524910
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27655981
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27787052
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27918123
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28049194
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28180265
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28311336
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28442407
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28573478
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28704549
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28835620
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28966691
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29097762
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29228833
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29359904
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29490975
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29622046
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29753117
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29884188
14:35:07 [INFO ] [main] c.z.n.c.WriteClient - readCount:115812, count: 30000000
To fix the problem in the course example above (busy-looping on write), the version below registers interest in the writable event instead.
@Slf4j
public class WriteServer {
public static void main(String[] args) throws Exception {
Selector selector = Selector.open();
ServerSocketChannel ssc = ServerSocketChannel.open();
ssc.configureBlocking(false);
ssc.bind(new InetSocketAddress(8080));
ssc.register(selector, SelectionKey.OP_ACCEPT);
while (true) {
selector.select();
log.info("select...");
Set<SelectionKey> selectedKeys = selector.selectedKeys();
Iterator<SelectionKey> it = selectedKeys.iterator();
while (it.hasNext()) {
SelectionKey selectionKey = it.next();
it.remove();
if (selectionKey.isAcceptable()) {
SocketChannel socketChannel = ssc.accept();
socketChannel.configureBlocking(false);
// (scKey is used below)
SelectionKey scKey = socketChannel.register(selector, SelectionKey.OP_READ);
// 1. Send a large amount of data
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 30000000; i++) {
sb.append("a");
}
ByteBuffer buffer = Charset.defaultCharset().encode(sb.toString());
// 2. The return value is the number of bytes actually written
int writeCount = socketChannel.write(buffer);
log.info("初次写入字节数量: {}", writeCount);
// 3. Check whether there is unwritten data left
if (buffer.hasRemaining()) {
// 4. Register interest in the writable event (it fires when the socket send buffer can accept data again)
// (Goal: when the data cannot all be written in one go, do not busy-loop here;
// instead register interest in OP_WRITE and write the remainder once the event fires)
// (Keep the existing interest set and add OP_WRITE to it)
scKey.interestOps(scKey.interestOps() | SelectionKey.OP_WRITE);
// (plain addition, interestOps() + SelectionKey.OP_WRITE, also works here, but only because OP_WRITE is not set yet; the bitwise OR is the safe idiom)
// 5. Attach the unwritten data to the SelectionKey as its attachment
scKey.attach(buffer);
}
} else if (selectionKey.isWritable()) { // fires once OP_WRITE is registered and the socket becomes writable
// Retrieve the attachment
ByteBuffer buffer = (ByteBuffer) selectionKey.attachment();
SocketChannel socketChannel = (SocketChannel) selectionKey.channel();
int writeCount = socketChannel.write(buffer);
log.info("written via the writable event: {}", writeCount);
// 6. Clean up
if (!buffer.hasRemaining()) {
// Detach the buffer (attach null to replace the old buffer)
selectionKey.attach(null);
// No longer interested in the writable event
selectionKey.interestOps(selectionKey.interestOps() & ~SelectionKey.OP_WRITE);
log.info("all data written!");
}
}
}
}
}
}
No changes to the client code:
@Slf4j
public class WriteClient {
public static void main(String[] args) throws Exception {
SocketChannel socketChannel = SocketChannel.open();
socketChannel.connect(new InetSocketAddress(8080));
ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024);// 1M
int count = 0;
while (true) {
int readCount = socketChannel.read(buffer);
count = count + readCount;
log.info("readCount:{}, count: {}", readCount, count);
buffer.clear();
}
}
}
/* Server log output */
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - select...
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - bytes written on first attempt: 3014633
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - select...
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - written via the writable event: 13369242
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - select...
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - written via the writable event: 13616125
15:12:37 [INFO ] [main] c.z.n.c.WriteServer - all data written!
/* Client log output */
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 131071
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 262142
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 393213
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 524284
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 655355
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 786426
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 917497
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1048568
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1179639
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1310710
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1441781
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1572852
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1703923
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1834994
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 1966065
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2097136
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2228207
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2359278
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2490349
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2621420
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2752491
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 2883562
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3014633
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3145704
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3276775
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3407846
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3538917
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3669988
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3801059
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 3932130
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4063201
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4194272
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4325343
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4456414
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4587485
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4718556
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 4849627
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 4915110
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 4980698
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5111769
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5242840
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5373911
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5504982
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5636053
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5767124
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 5898195
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6029266
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6160337
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6291408
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6422479
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 6487962
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 6553550
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6684621
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6815692
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 6946763
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7077834
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7208905
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7339976
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7471047
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7602118
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7733189
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7864260
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 7995331
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8126402
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8257473
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8388544
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8519615
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8650686
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8781757
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 8912828
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9043899
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9174970
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9306041
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9437112
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9568183
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9699254
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9830325
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 9961396
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 10092467
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 10223538
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 10354609
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 10420092
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 10485680
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 10616751
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 10747822
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 10813305
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 10878893
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11009964
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:130966, count: 11140930
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11272001
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:105, count: 11272106
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11403177
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11534248
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11665319
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11796390
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 11861873
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 11992944
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 12058532
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 12124015
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 12189603
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 12320674
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:130966, count: 12451640
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 12517228
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 12648299
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 12779370
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 12910441
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13041512
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13172583
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13303654
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13434725
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13565796
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13696867
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13827938
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 13959009
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14090080
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14221151
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14352222
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14483293
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14614364
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14745435
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 14876506
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15007577
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15138648
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15269719
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15400790
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15531861
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15662932
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15794003
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 15925074
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16056145
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16187216
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16318287
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16449358
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16580429
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16711500
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16842571
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 16973642
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17104713
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17235784
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17366855
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17497926
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17628997
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17760068
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 17891139
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18022210
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18153281
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18284352
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18415423
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18546494
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18677565
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18808636
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 18939707
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19070778
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19201849
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19332920
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19463991
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19595062
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19726133
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19857204
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 19988275
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 20053863
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 20119346
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 20184934
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 20316005
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 20447076
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 20578147
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 20643630
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 20709218
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 20774701
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 20840289
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 20905772
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 20971360
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 21036843
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 21102431
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:130966, count: 21233397
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:105, count: 21233502
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 21364573
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 21495644
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 21626715
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 21692198
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 21757786
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 21823269
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 21888857
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 21954340
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 22019928
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 22085411
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 22150999
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22282070
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22413141
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22544212
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22675283
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22806354
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 22937425
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 23002908
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 23068496
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 23133979
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 23199567
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 23265050
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 23330638
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 23396121
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 23461709
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 23592780
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 23658263
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 23723851
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:130966, count: 23854817
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:105, count: 23854922
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 23985993
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24117064
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24248135
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24379206
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24510277
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24641348
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24772419
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 24903490
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:130966, count: 25034456
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:105, count: 25034561
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25165632
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25296703
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25427774
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25558845
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25689916
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25820987
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 25952058
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26083129
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26214200
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 26279683
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 26345271
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26476342
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26607413
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26738484
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 26869555
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27000626
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27131697
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 27197180
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 27262768
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27393839
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27524910
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27655981
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27787052
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 27918123
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28049194
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28180265
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 28245748
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 28311336
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28442407
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 28507890
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 28573478
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28704549
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:130966, count: 28835515
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 28966586
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29097657
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29228728
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 29294316
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65588, count: 29359904
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29490975
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:65483, count: 29556458
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29687529
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29818600
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:131071, count: 29949671
15:12:37 [INFO ] [main] c.z.n.c.WriteClient - readCount:50329, count: 30000000
As long as the socket send buffer can accept data, the writable event keeps firing, and it fires very frequently; therefore only register interest in the writable event when the socket buffer is too full to take a write, and deregister it once the data has been fully written.
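As a small sketch of that add/remove pattern (key here stands for the connection's SelectionKey, as in the server above):
// socket buffer is full and data remains: start watching the writable event
key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
// everything has been written: stop watching, or OP_WRITE fires on every select()
key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);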
CPUs are all multi-core now; a design should take care not to let that CPU power go to waste.
The previous code uses only one selector and does not make full use of a multi-core CPU. How can it be improved?
Split the selectors into two groups:
public class ChannelDemo7 {
public static void main(String[] args) throws IOException {
new BossEventLoop().register();
}
@Slf4j
static class BossEventLoop implements Runnable {
private Selector boss;
private WorkerEventLoop[] workers;
private volatile boolean start = false;
AtomicInteger index = new AtomicInteger();
public void register() throws IOException {
if (!start) {
ServerSocketChannel ssc = ServerSocketChannel.open();
ssc.bind(new InetSocketAddress(8080));
ssc.configureBlocking(false);
boss = Selector.open();
SelectionKey ssckey = ssc.register(boss, 0, null);
ssckey.interestOps(SelectionKey.OP_ACCEPT);
workers = initEventLoops();
new Thread(this, "boss").start();
log.debug("boss start...");
start = true;
}
}
public WorkerEventLoop[] initEventLoops() {
// EventLoop[] eventLoops = new EventLoop[Runtime.getRuntime().availableProcessors()];
WorkerEventLoop[] workerEventLoops = new WorkerEventLoop[2];
for (int i = 0; i < workerEventLoops.length; i++) {
workerEventLoops[i] = new WorkerEventLoop(i);
}
return workerEventLoops;
}
@Override
public void run() {
while (true) {
try {
boss.select();
Iterator<SelectionKey> iter = boss.selectedKeys().iterator();
while (iter.hasNext()) {
SelectionKey key = iter.next();
iter.remove();
if (key.isAcceptable()) {
ServerSocketChannel c = (ServerSocketChannel) key.channel();
SocketChannel sc = c.accept();
sc.configureBlocking(false);
log.debug("{} connected", sc.getRemoteAddress());
workers[index.getAndIncrement() % workers.length].register(sc);
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
@Slf4j
static class WorkerEventLoop implements Runnable {
private Selector worker;
private volatile boolean start = false;
private int index;
private final ConcurrentLinkedQueue<Runnable> tasks = new ConcurrentLinkedQueue<>();
public WorkerEventLoop(int index) {
this.index = index;
}
public void register(SocketChannel sc) throws IOException {
if (!start) {
worker = Selector.open();
new Thread(this, "worker-" + index).start();
start = true;
}
tasks.add(() -> {
try {
SelectionKey sckey = sc.register(worker, 0, null);
sckey.interestOps(SelectionKey.OP_READ);
worker.selectNow();
} catch (IOException e) {
e.printStackTrace();
}
});
worker.wakeup();
}
@Override
public void run() {
while (true) {
try {
worker.select();
Runnable task = tasks.poll();
if (task != null) {
task.run();
}
Set<SelectionKey> keys = worker.selectedKeys();
Iterator<SelectionKey> iter = keys.iterator();
while (iter.hasNext()) {
SelectionKey key = iter.next();
if (key.isReadable()) {
SocketChannel sc = (SocketChannel) key.channel();
ByteBuffer buffer = ByteBuffer.allocate(128);
try {
int read = sc.read(buffer);
if (read == -1) {
key.cancel();
sc.close();
} else {
buffer.flip();
log.debug("{} message:", sc.getRemoteAddress());
debugAll(buffer);
}
} catch (IOException e) {
e.printStackTrace();
key.cancel();
sc.close();
}
}
iter.remove();
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
}
- Runtime.getRuntime().availableProcessors(): when running inside a Docker container, because containers are not physically isolated, this returns the number of physical CPUs rather than the number the container was allotted
- The problem was only fixed in JDK 10, via the UseContainerSupport JVM flag, which is enabled by default (a quick check is sketched below)
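A minimal sketch for checking what the JVM actually sees (the class name is made up; the two flags are standard HotSpot options):
public class CpuCount {
    public static void main(String[] args) {
        // On JDK 10+ with UseContainerSupport (enabled by default) this respects
        // the container's CPU quota; on older JDKs it reports the host's CPU count
        System.out.println(Runtime.getRuntime().availableProcessors());
    }
}
// java -XX:-UseContainerSupport CpuCount      -> ignore container limits again
// java -XX:ActiveProcessorCount=2 CpuCount    -> force a specific count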
The ordering of selector.select() and sc.register(selector, …)
There is no question that select() blocks the calling thread. What is easy to miss is that while a selector is blocked in select(), another thread calling socketChannel.register(selector, …) on that same selector will itself block: the register call cannot complete until select() returns (for example because some SelectionKey's event fired). In other words, while a selector is blocked in select(), no new channel can be registered with it.
selector.wakeup
selector.wakeup() wakes up the select method. Specifically: if the selector is currently blocked in select(), wakeup() wakes it immediately; if select() has not been called yet and wakeup() is called first, the next select() call does not block and returns straight away. It behaves a bit like LockSupport.unpark.
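A minimal sketch of the "wakeup first, select later" behaviour just described:
import java.nio.channels.Selector;

public class WakeupDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        selector.wakeup(); // called before select: the wakeup is remembered
        selector.select(); // returns immediately instead of blocking forever
        System.out.println("select returned without any registered event");
    }
}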
public class MultiThreadServer {
public static void main(String[] args) throws Exception {
Thread.currentThread().setName("boss-thread");
ServerSocketChannel ssChannel = ServerSocketChannel.open();
ssChannel.configureBlocking(false);
Selector boss = Selector.open();
ssChannel.register(boss, SelectionKey.OP_ACCEPT);
ssChannel.bind(new InetSocketAddress(8080));
Worker worker = new Worker("worker-1");
while (true) {
boss.select();
Iterator<SelectionKey> it = boss.selectedKeys().iterator();
while (it.hasNext()) {
SelectionKey sk = it.next();
it.remove();
if (sk.isAcceptable()) {
SocketChannel socketChannel = ssChannel.accept();
// Note: this must be configured as non-blocking; a selector only works with
// non-blocking channels, otherwise an IllegalBlockingModeException is thrown
socketChannel.configureBlocking(false);
// Hand the socketChannel over to the worker
worker.register(socketChannel);
}
}
}
}
static class Worker implements Runnable {
private String name;
private Thread thread;
private Selector selector;
private volatile boolean start = false;
private ConcurrentLinkedQueue<Runnable> queue = new ConcurrentLinkedQueue();
public Worker(String name) {
this.name = name;
}
public void register(SocketChannel socketChannel) throws Exception {
if (!start) {
// One-time initialisation
selector = Selector.open();
thread = new Thread(this, name);
thread.start();
start = true;
}
// While selector.select() is blocking, a socketChannel cannot be registered
// with the selector, so a queue is introduced for inter-thread communication
queue.add(() -> {
try {
socketChannel.register(selector, SelectionKey.OP_READ);
} catch (ClosedChannelException e) {
e.printStackTrace();
}
});
// Wake up the selector
// (if the selector is blocked in select(), wakeup() wakes it immediately;
// if select() has not been executed yet and wakeup() is called first,
// the next select() call does not block and returns straight away)
selector.wakeup();
}
@Override
public void run() {
while (true) {
try {
selector.select();
//-------------------------------
// Note: this block must not be moved above selector.select(), because there is
// no guarantee it would run after register() has added its task to the queue.
// Below select() it is safe: select() blocks unless wakeup() ran first; if
// wakeup() ran first, the task is already in the queue and gets polled here;
// if wakeup() has not run yet, select() blocks until register() adds the task
// and calls wakeup(), after which the task is polled and executed.
Runnable task = queue.poll();
if (task != null) {
task.run();
}
//-------------------------------
Iterator<SelectionKey> it = selector.selectedKeys().iterator();
while (it.hasNext()) {
SelectionKey sk = it.next();
it.remove();
if (sk.isReadable()) {
ByteBuffer buffer = ByteBuffer.allocate(16);
SocketChannel socketChannel = (SocketChannel) sk.channel();
int read = socketChannel.read(buffer);
// -1 means the client disconnected: cancel the key and close the channel,
// otherwise the read event would keep firing forever
if (read == -1) {
sk.cancel();
socketChannel.close();
} else {
buffer.flip();
debugAll(buffer);
}
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
}
public class TestClient {
public static void main(String[] args) throws IOException {
SocketChannel socketChannel = SocketChannel.open();
socketChannel.connect(new InetSocketAddress(8080));
socketChannel.write(Charset.defaultCharset().encode("0123456789abcdef"));
System.in.read();
}
}
Start the server and several clients: everything works.
(The video also shows another approach, but it looked problematic, and testing confirmed it is, so that code is not included here. The queue-based version above works correctly.)
First, start the server:
public class UdpServer {
public static void main(String[] args) {
try (DatagramChannel channel = DatagramChannel.open()) {
channel.socket().bind(new InetSocketAddress(9999));
System.out.println("waiting...");
ByteBuffer buffer = ByteBuffer.allocate(32);
channel.receive(buffer);
buffer.flip();
debug(buffer);
} catch (IOException e) {
e.printStackTrace();
}
}
}
Output
waiting...
Run the client:
public class UdpClient {
public static void main(String[] args) {
try (DatagramChannel channel = DatagramChannel.open()) {
ByteBuffer buffer = StandardCharsets.UTF_8.encode("hello");
InetSocketAddress address = new InetSocketAddress("localhost", 9999);
channel.send(buffer, address);
} catch (Exception e) {
e.printStackTrace();
}
}
}
The server then prints:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 68 65 6c 6c 6f |hello |
+--------+-------------------------------------------------+----------------+
Synchronous blocking, synchronous non-blocking, synchronous multiplexing, asynchronous blocking (no such case), asynchronous non-blocking (anything else is pure nonsense! O(∩_∩)O)
A single call to channel.read or stream.read switches to the operating-system kernel to perform the real data read, and that read has two phases:
waiting for the data
copying the data
(figure: assets/0033.png)
The five IO models
Blocking IO (when read is called, the user thread is blocked and can do nothing else until the data is returned)
(figure: assets/0039.png)
Non-blocking IO (during the wait-for-data phase the user thread can keep polling for readable data without blocking; once data arrives, it still blocks while the data is copied, returning only after the copy completes)
(figure: assets/0035.png)
Multiplexed IO (one selector waits for the data; when data is available a readable event notifies the user thread; the copy phase still blocks)
(figure: assets/0038.png)
Signal-driven IO
Asynchronous IO
(figure: assets/0037.png)
Blocking IO vs multiplexing: blocking IO cannot do two things at once; while the thread is in read, an accept has to wait, and once it is in accept, data arriving on channel1 cannot be handled until the accept completes. With multiplexing, select waits for events; when several events fire at once, select returns and a while loop handles each of them.
(figure: assets/0034.png)
(figure: assets/0036.png)
UNIX Network Programming, Volume I
Traditional IO, writing a file out through a socket:
File f = new File("helloword/data.txt");
RandomAccessFile file = new RandomAccessFile(f, "r");
byte[] buf = new byte[(int)f.length()];
file.read(buf);
Socket socket = ...;
socket.getOutputStream().write(buf);
The internal workflow looks like this:
(figure: assets/0024.png)
Java itself has no IO capability, so after read is called, execution switches from the Java program's user mode to kernel mode to invoke the operating system's (kernel's) read, which pulls the data into a kernel buffer. The user thread is blocked during this time; the OS uses DMA (Direct Memory Access) for the file read, so the CPU is not involved either
DMA can be understood as a hardware unit that offloads file IO from the CPU
Switching back from kernel mode to user mode, the data is copied from the kernel buffer into the user buffer (the byte[] buf); the CPU takes part in this copy and DMA cannot be used
Calling write copies the data from the user buffer (byte[] buf) into the socket buffer; the CPU takes part in this copy as well
Next the data must be written to the network card, which Java again cannot do itself, so there is another user-to-kernel switch to invoke the operating system's write, which uses DMA to move the socket buffer's contents to the NIC without the CPU
As you can see there are many intermediate steps: Java IO is not device-level reading and writing but copying between caches; the real low-level reads and writes are done by the operating system
Optimization: using a DirectByteBuf
(figure: assets/0025.png)
Most steps are the same as before and are not repeated. The one difference: Java can use a DirectByteBuf, which maps off-heap memory into the JVM so it can be accessed directly
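As a minimal sketch (not course code, reusing the earlier data.txt), reading through a direct buffer looks like this:
public class DirectBufferDemo {
    public static void main(String[] args) throws IOException {
        try (FileChannel channel = new FileInputStream("data.txt").getChannel()) {
            // the buffer lives outside the JVM heap, so the OS can fill it directly,
            // skipping the extra copy into a heap byte[]
            ByteBuffer buffer = ByteBuffer.allocateDirect(16);
            channel.read(buffer);
            buffer.flip();
            while (buffer.hasRemaining()) {
                System.out.print((char) buffer.get());
            }
        }
    }
}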
Further optimization (using the sendFile method Linux provides since 2.1); in Java this corresponds to calling transferTo/transferFrom on two channels to copy the data
(figure: assets/0026.png)
As you can see, only one user/kernel mode switch happens, and the data is copied 3 times (DMA into the kernel buffer, a CPU copy into the socket buffer, DMA out to the network card)
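A minimal sketch of the corresponding transferTo call (socketChannel is assumed to be an open SocketChannel; the names are illustrative). The loop matters because a single transferTo call may transfer fewer bytes than requested:
try (FileChannel from = new FileInputStream("data.txt").getChannel()) {
    long position = 0;
    long size = from.size();
    while (position < size) {
        // transferTo returns the number of bytes actually transferred
        position += from.transferTo(position, size - position, socketChannel);
    }
}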
Further optimization (Linux 2.4)
(figure: assets/0027.png)
The whole process now involves only one user/kernel mode switch, and the data is copied 2 times. The so-called "zero copy" is not truly copy-free; it means no redundant copies of the data into JVM memory. Its advantages: fewer user/kernel mode switches; the copying is done by DMA rather than the CPU; and no JVM heap memory is consumed by intermediate buffers
AIO addresses the blocking in the data-copy phase
An asynchronous model needs support from the underlying operating system (kernel)
- Windows implements true asynchronous IO through IOCP
- Linux introduced asynchronous IO in kernel 2.6, but the underlying implementation still simulates it with multiplexing, so there is no performance advantage
Let's first look at AsynchronousFileChannel
@Slf4j
public class AioFileChannel {
public static void main(String[] args) throws IOException {
try (AsynchronousFileChannel channel = AsynchronousFileChannel.open(Paths.get("data.txt"), StandardOpenOption.READ)) {
ByteBuffer buffer = ByteBuffer.allocate(16);
log.debug("read begin...");
// param 1: the ByteBuffer to read into
// param 2: the position in the file to start reading from
// param 3: an attachment passed through to the handler
// param 4: the callback, a CompletionHandler
channel.read(
buffer,
0,
buffer,
new CompletionHandler<Integer, ByteBuffer>() {
@Override // called when the read succeeds
public void completed(Integer result, ByteBuffer attachment) {
log.debug("read completed...{}", result);
attachment.flip();
debugAll(attachment);
}
@Override // called when the read fails
public void failed(Throwable exc, ByteBuffer attachment) {
exc.printStackTrace();
}
}
);
log.debug("read end...");
} catch (IOException e) {
e.printStackTrace();
}
System.in.read();
}
}
Output
22:00:09 [DEBUG] [main] c.i.n.c.AioFileChannel - read begin...
22:00:09 [DEBUG] [main] c.i.n.c.AioFileChannel - read end...
22:00:09 [DEBUG] [Thread-6] c.i.n.c.AioFileChannel - read completed...15
+--------+-------------------- all ------------------------+----------------+
position: [0], limit: [15]
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 31 32 33 34 35 36 37 38 39 30 61 62 63 0d 0a 00 |1234567890abc...|
+--------+-------------------------------------------------+----------------+
As the output shows, the threads file AIO uses are daemon threads by default, so System.in.read() must be executed at the end to keep the JVM alive; otherwise the daemon threads would exit before the callback gets a chance to run.
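If blocking on System.in.read() feels awkward, AsynchronousFileChannel.open has an overload that accepts a custom ExecutorService, and the default thread factory creates non-daemon threads. A minimal sketch under that assumption (same data.txt as above):
ExecutorService pool = Executors.newFixedThreadPool(1);
AsynchronousFileChannel channel = AsynchronousFileChannel.open(
        Paths.get("data.txt"), EnumSet.of(StandardOpenOption.READ), pool);
// the read(...) call with its CompletionHandler stays the same as above; the
// callback now runs on a non-daemon pool thread, so the JVM stays alive
// until pool.shutdown() is called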
public class AioServer {
public static void main(String[] args) throws IOException {
AsynchronousServerSocketChannel ssc = AsynchronousServerSocketChannel.open();
ssc.bind(new InetSocketAddress(8080));
ssc.accept(null, new AcceptHandler(ssc));
System.in.read();
}
private static void closeChannel(AsynchronousSocketChannel sc) {
try {
System.out.printf("[%s] %s close\n", Thread.currentThread().getName(), sc.getRemoteAddress());
sc.close();
} catch (IOException e) {
e.printStackTrace();
}
}
private static class ReadHandler implements CompletionHandler<Integer, ByteBuffer> {
private final AsynchronousSocketChannel sc;
public ReadHandler(AsynchronousSocketChannel sc) {
this.sc = sc;
}
@Override
public void completed(Integer result, ByteBuffer attachment) {
try {
if (result == -1) {
closeChannel(sc);
return;
}
System.out.printf("[%s] %s read\n", Thread.currentThread().getName(),
sc.getRemoteAddress());
attachment.flip();
System.out.println(Charset.defaultCharset().decode(attachment));
attachment.clear();
// After handling one read, call read again to handle the next read event
sc.read(attachment, attachment, this);
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void failed(Throwable exc, ByteBuffer attachment) {
closeChannel(sc);
exc.printStackTrace();
}
}
private static class WriteHandler implements CompletionHandler<Integer, ByteBuffer> {
private final AsynchronousSocketChannel sc;
private WriteHandler(AsynchronousSocketChannel sc) {
this.sc = sc;
}
@Override
public void completed(Integer result, ByteBuffer attachment) {
// If the attachment buffer still has content, write the remainder, reusing this handler
if (attachment.hasRemaining()) {
sc.write(attachment, attachment, this);
}
}
@Override
public void failed(Throwable exc, ByteBuffer attachment) {
exc.printStackTrace();
closeChannel(sc);
}
}
private static class AcceptHandler implements CompletionHandler<AsynchronousSocketChannel, Object> {
private final AsynchronousServerSocketChannel ssc;
public AcceptHandler(AsynchronousServerSocketChannel ssc) {
this.ssc = ssc;
}
@Override
public void completed(AsynchronousSocketChannel sc, Object attachment) {
try {
System.out.printf("[%s] %s connected\n", Thread.currentThread().getName(), sc.getRemoteAddress());
} catch (IOException e) {
e.printStackTrace();
}
ByteBuffer buffer = ByteBuffer.allocate(16);
// Read events are handled by ReadHandler
sc.read(buffer, buffer, new ReadHandler(sc));
// Write events are handled by WriteHandler; the buffer being written is also
// passed as the attachment so the handler can write out any remainder
ByteBuffer hello = Charset.defaultCharset().encode("server hello!");
sc.write(hello, hello, new WriteHandler(sc));
// After handling one accept, call accept again to handle the next accept event
ssc.accept(null, this);
}
@Override
public void failed(Throwable exc, Object attachment) {
exc.printStackTrace();
}
}
}