Netty in Action
Author: Norman Maurer / Marvin Allen Wolfthal
Publisher: Manning Publications
Publication date: 2015-12-31
Pages: 296
Price: USD 54.99
Binding: Paperback
ISBN: 9781617291470
A good read: Netty is also a classic framework for learning Java network programming.
1.3 Netty’s core components
In this section we’ll discuss Netty’s primary building blocks:
■ Channels
■ Callbacks
■ Futures
■ Events and handlers
These building blocks represent different types of constructs: resources, logic, and
notifications. Your applications will use them to access the network and the data that
flows through it.
For each component, we’ll provide a basic definition and, where appropriate, a
simple code example that illustrates its use.
1.3.1 Channels
A Channel is a basic construct of Java NIO. It represents
an open connection to an entity such as a hardware device, a file, a
network socket, or a program component that is capable of performing
one or more distinct I/O operations, for example reading or writing.
For now, think of a Channel as a vehicle for incoming (inbound) and outgoing (outbound) data. As such, it can be open or closed, connected or disconnected.
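The open/closed life cycle can be observed directly. The following sketch uses Netty's EmbeddedChannel, an in-memory Channel implementation intended for testing, rather than a real network socket:

```java
import io.netty.channel.Channel;
import io.netty.channel.embedded.EmbeddedChannel;

public class ChannelLifecycle {
    // A minimal sketch of the Channel life cycle using EmbeddedChannel,
    // Netty's in-memory test transport: open on creation, closed on close().
    public static boolean demo() {
        Channel channel = new EmbeddedChannel();
        boolean wasOpen = channel.isOpen();     // true: the channel starts open
        channel.close().syncUninterruptibly();  // close() is asynchronous; wait for it
        return wasOpen && !channel.isOpen();    // now closed
    }

    public static void main(String[] args) {
        System.out.println("closed cleanly: " + demo());
    }
}
```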
1.3.2 Callbacks
A callback is simply a method, a reference to which has been provided to another
method. This enables the latter to call the former at an appropriate time. Callbacks
are used in a broad range of programming situations and represent one of the most
common ways to notify an interested party that an operation has completed.
Netty uses callbacks internally when handling events; when a callback is triggered
the event can be handled by an implementation of interface ChannelHandler.
1.3.3 Futures
A Future provides another way to notify an application when an operation has completed. This object acts as a placeholder for the result of an asynchronous operation;
it will complete at some point in the future and provide access to the result.
The JDK ships with interface java.util.concurrent.Future, but the provided
implementations allow you only to check manually whether the operation has completed or to block until it does. This is quite cumbersome, so Netty provides its own
implementation, ChannelFuture, for use when an asynchronous operation is executed.
ChannelFuture provides additional methods that allow us to register one or
more ChannelFutureListener instances. The listener’s callback method, operationComplete(), is called when the operation has completed. The listener can then determine whether the operation completed successfully or with an error. If the latter, we
can retrieve the Throwable that was produced. In short, the notification mechanism
provided by the ChannelFutureListener eliminates the need for manually checking
operation completion.
Each of Netty’s outbound I/O operations returns a ChannelFuture; that is, none of
them block. As we said earlier, Netty is asynchronous and event-driven from the
ground up.
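The listener pattern that ChannelFutureListener embodies can be illustrated with the JDK's own CompletableFuture, whose thenAccept callback plays a role analogous to operationComplete(). This is an analogy, not Netty's API:

```java
import java.util.concurrent.CompletableFuture;

public class ListenerSketch {
    public static String run() {
        CompletableFuture<String> future =
                CompletableFuture.supplyAsync(() -> "operation result");
        StringBuilder notified = new StringBuilder();
        // Register a callback instead of blocking on get();
        // join() appears here only so main can observe the outcome.
        future.thenAccept(notified::append).join();
        return notified.toString();
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

With Netty the shape is the same: the ChannelFuture returned by an outbound operation carries the listener, and the I/O thread invokes operationComplete() when the operation finishes.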
1.3.4 Events and handlers
Netty uses distinct events to notify us about changes of state or the status of operations. This allows us to trigger the appropriate action based on the event that has
occurred. Such actions might include
■ Logging
■ Data transformation
■ Flow-control
■ Application logic
Netty is a networking framework, so events are categorized by their relevance to
inbound or outbound data flow. Events that may be triggered by inbound data or an
associated change of state include
■ Active or inactive connections
■ Data reads
■ User events
■ Error events
An outbound event is the result of an operation that will trigger an action in the
future, which may be
■ Opening or closing a connection to a remote peer
■ Writing or flushing data to a socket
Every event can be dispatched to a user-implemented method of a handler class. This
is a good example of an event-driven paradigm translating directly into application
building blocks. Figure 1.3 shows how an event can be handled by a chain of such
event handlers.
Netty’s ChannelHandler provides the basic abstraction for handlers like the ones
shown in figure 1.3. We’ll have a lot more to say about ChannelHandler in due course,
but for now you can think of each handler instance as a kind of callback to be executed in response to a specific event.
Netty provides an extensive set of predefined handlers that you can use out of the
box, including handlers for protocols such as HTTP and SSL/TLS. Internally, ChannelHandlers use events and futures themselves, making them consumers of the same
abstractions your applications will employ.
1.3.5 Putting it all together
In this chapter you’ve been introduced to Netty’s approach to high-performance networking and to some of the primary components of its implementation. Let’s assemble a big-picture view of what we’ve discussed.
FUTURES, CALLBACKS, AND HANDLERS
Netty’s asynchronous programming model is built on the concepts of Futures and
callbacks, with the dispatching of events to handler methods happening at a deeper
level. Taken together, these elements provide a processing environment that allows
the logic of your application to evolve independently of any concerns with network
operations. This is a key goal of Netty’s design approach.
Intercepting operations and transforming inbound or outbound data on the fly
requires only that you provide callbacks or utilize the Futures that are returned by
operations. This makes chaining operations easy and efficient and promotes the
writing of reusable, generic code.
SELECTORS, EVENTS, AND EVENT LOOPS
Netty abstracts the Selector away from the application by firing events, eliminating
all the handwritten dispatch code that would otherwise be required. Under the covers, an EventLoop is assigned to each Channel to handle all of the events, including
■ Registration of interesting events
■ Dispatching events to ChannelHandlers
■ Scheduling further actions
The EventLoop itself is driven by only one thread that handles all of the I/O events for
one Channel and does not change during the lifetime of the EventLoop. This simple
and powerful design eliminates any concern you might have about synchronization in
your ChannelHandlers, so you can focus on providing the right logic to be executed
when there is interesting data to process. As we’ll see when we explore Netty’s threading model in detail, the API is simple and compact.
What happens if an exception isn’t caught?
Every Channel has an associated ChannelPipeline, which holds a chain of ChannelHandler instances. By default, a handler will forward the invocation of a handler
method to the next one in the chain. Therefore, if exceptionCaught() is not implemented somewhere along the chain, exceptions received will travel to the end of the
ChannelPipeline and will be logged. For this reason, your application should supply
at least one ChannelHandler that implements exceptionCaught(). (Section 6.4
discusses exception handling in detail.)
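A minimal last-resort handler might look like the following sketch. The class names are our own, and closing the Channel on error is one common policy, not the only one; the demo method exercises it through EmbeddedChannel:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.embedded.EmbeddedChannel;

public class ErrorHandlingExample {
    // Terminal handler: logs the Throwable and gives up on the connection.
    public static class LastResortHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            cause.printStackTrace();  // real code would use a proper logger
            ctx.close();              // one common policy: close the channel
        }
    }

    public static boolean demo() {
        EmbeddedChannel channel = new EmbeddedChannel(new LastResortHandler());
        channel.pipeline().fireExceptionCaught(new RuntimeException("boom"));
        return !channel.isOpen();     // the handler closed the channel
    }

    public static void main(String[] args) {
        System.out.println("channel closed on error: " + demo());
    }
}
```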
(The @Sharable annotation marks a ChannelHandler class whose instances can safely be shared among multiple Channels.)
Netty components and design
From a high-level perspective, Netty addresses two corresponding areas of concern, which we might label broadly as technical and architectural. First, its asynchronous and event-driven implementation, built on Java NIO, guarantees maximum application performance and scalability under heavy load. Second, Netty embodies a
set of design patterns that decouple application logic from the network layer, simplifying development while maximizing the testability, modularity, and reusability of code.
3.1 Channel, EventLoop, and ChannelFuture
■ Channel—Sockets
■ EventLoop—Control flow, multithreading, concurrency
■ ChannelFuture—Asynchronous notification
3.1.1 Interface Channel
Basic I/O operations (bind(), connect(), read(), and write()) depend on primitives
supplied by the underlying network transport. In Java-based networking, the fundamental construct is class Socket. Netty’s Channel interface provides an API that greatly
reduces the complexity of working directly with Sockets. Additionally, Channel is the
root of an extensive class hierarchy having many predefined, specialized implementations, of which the following is a short list:
■ EmbeddedChannel
■ LocalServerChannel
■ NioDatagramChannel
■ NioSctpChannel
■ NioSocketChannel
3.1.2 Interface EventLoop
The EventLoop defines Netty’s core abstraction for handling events that occur during
the lifetime of a connection. We’ll discuss EventLoop in detail in chapter 7 in the context of Netty’s thread-handling model. For now, figure 3.1 illustrates at a high level the
relationships among Channels, EventLoops, Threads, and EventLoopGroups.
These relationships are:
■ An EventLoopGroup contains one or more EventLoops.
■ An EventLoop is bound to a single Thread for its lifetime.
■ All I/O events processed by an EventLoop are handled on its dedicated Thread.
■ A Channel is registered for its lifetime with a single EventLoop.
■ A single EventLoop may be assigned to one or more Channels.
Note that this design, in which the I/O for a given Channel is executed by the same
Thread, virtually eliminates the need for synchronization.
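The one-loop-one-thread rule can be observed directly: tasks submitted to an EventLoop from any thread all run on that loop's single dedicated thread. A sketch (the group size and the tasks are illustrative):

```java
import io.netty.channel.EventLoop;
import io.netty.channel.nio.NioEventLoopGroup;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

public class EventLoopThreading {
    public static boolean demo() throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup(1);
        try {
            EventLoop loop = group.next();
            AtomicReference<Thread> first = new AtomicReference<>();
            AtomicReference<Thread> second = new AtomicReference<>();
            CountDownLatch done = new CountDownLatch(2);
            loop.execute(() -> { first.set(Thread.currentThread()); done.countDown(); });
            loop.execute(() -> { second.set(Thread.currentThread()); done.countDown(); });
            done.await();
            // Both tasks ran on the same loop thread, never on the caller's thread.
            return first.get() == second.get() && first.get() != Thread.currentThread();
        } finally {
            group.shutdownGracefully().syncUninterruptibly();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("single-threaded loop: " + demo());
    }
}
```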
3.1.3 Interface ChannelFuture
As we’ve explained, all I/O operations in Netty are asynchronous. Because an operation may not return immediately, we need a way to determine its result at a later time.
For this purpose, Netty provides ChannelFuture, whose addListener() method registers a ChannelFutureListener to be notified when an operation has completed
(whether or not successfully).
MORE ON CHANNELFUTURE Think of a ChannelFuture as a placeholder for the
result of an operation that’s to be executed in the future. When exactly it will
be executed may depend on several factors and thus be impossible to predict
with precision, but it is certain that it will be executed. Furthermore, all operations belonging to the same Channel are guaranteed to be executed in the
order in which they were invoked.
3.2 ChannelHandler and ChannelPipeline
Now we’ll take a more detailed look at the components that manage the flow of data
and execute an application’s processing logic.
3.2.1 Interface ChannelHandler
From the application developer’s standpoint, the primary component of Netty is the
ChannelHandler, which serves as the container for all application logic that applies
to handling inbound and outbound data. This is possible because ChannelHandler
methods are triggered by network events (where the term “event” is used very
broadly). In fact, a ChannelHandler can be dedicated to almost any kind of action,
such as converting data from one format to another or handling exceptions thrown
during processing.
As an example, ChannelInboundHandler is a subinterface you’ll implement frequently. This type receives inbound events and data to be handled by your application’s business logic. You can also flush data from a ChannelInboundHandler when
you’re sending a response to a connected client. The business logic of your application will often reside in one or more ChannelInboundHandlers.
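As a sketch of business logic living in a ChannelInboundHandler, here is a minimal echo handler (our own example, exercised through Netty's EmbeddedChannel rather than a live socket):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.embedded.EmbeddedChannel;

public class EchoExample {
    public static class EchoHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ctx.write(msg);   // queue the message back toward the sender
        }
        @Override
        public void channelReadComplete(ChannelHandlerContext ctx) {
            ctx.flush();      // flush once the current read batch ends
        }
    }

    public static Object echo(Object msg) {
        EmbeddedChannel channel = new EmbeddedChannel(new EchoHandler());
        channel.writeInbound(msg);      // simulate data arriving from the peer
        channel.finish();
        return channel.readOutbound();  // what the handler wrote back
    }

    public static void main(String[] args) {
        System.out.println(echo("hello"));
    }
}
```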
3.2.2 Interface ChannelPipeline
A ChannelPipeline provides a container for a chain of ChannelHandlers and defines
an API for propagating the flow of inbound and outbound events along the chain.
When a Channel is created, it is automatically assigned its own ChannelPipeline.
ChannelHandlers are installed in the ChannelPipeline as follows:
■ A ChannelInitializer implementation is registered with a ServerBootstrap.
■ When ChannelInitializer.initChannel() is called, the ChannelInitializer
installs a custom set of ChannelHandlers in the pipeline.
■ The ChannelInitializer removes itself from the ChannelPipeline.
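The steps above translate into bootstrap code along these lines. Port 0 asks the OS for any free port, and the inner handler is a placeholder for real application handlers:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class BootstrapExample {
    public static boolean demo() throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup(1);
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(group)
                     .channel(NioServerSocketChannel.class)
                     .childHandler(new ChannelInitializer<SocketChannel>() {
                         @Override
                         protected void initChannel(SocketChannel ch) {
                             // Install the application's handlers; the initializer
                             // then removes itself from the pipeline.
                             ch.pipeline().addLast(new ChannelInboundHandlerAdapter());
                         }
                     });
            Channel server = bootstrap.bind(0).sync().channel(); // port 0 = any free port
            boolean bound = server.isActive();
            server.close().sync();
            return bound;
        } finally {
            group.shutdownGracefully().syncUninterruptibly();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("server bound: " + demo());
    }
}
```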
Let’s go a bit deeper into the symbiotic relationship between ChannelPipeline and
ChannelHandler to examine what happens to data when you send or receive it.
ChannelHandler has been designed specifically to support a broad range of uses,
and you can think of it as a generic container for any code that processes events
(including data) coming and going through the ChannelPipeline.
The movement of an event through the pipeline is the work of the ChannelHandlers
that have been installed during the initialization, or bootstrapping phase of the application. These objects receive events, execute the processing logic for which they have
been implemented, and pass the data to the next handler in the chain. The order in
which they are executed is determined by the order in which they were added. For all
practical purposes, it’s this ordered arrangement of ChannelHandlers that we refer to
as the ChannelPipeline.
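Handler ordering can be seen with two trivial inbound handlers (the names are ours) that each tag a message as it passes along the chain:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.embedded.EmbeddedChannel;

public class PipelineOrderExample {
    // Each handler appends its tag, then forwards the message to the next handler.
    static class TaggingHandler extends ChannelInboundHandlerAdapter {
        private final String tag;
        TaggingHandler(String tag) { this.tag = tag; }
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ctx.fireChannelRead(msg + tag);  // pass along the pipeline
        }
    }

    public static Object run() {
        // Inbound handlers fire in the order they were added: first, then second.
        EmbeddedChannel channel = new EmbeddedChannel(
                new TaggingHandler("-first"), new TaggingHandler("-second"));
        channel.writeInbound("msg");
        return channel.readInbound();        // whatever reached the end of the pipeline
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```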
3.2.4 Encoders and decoders
3.3 Bootstrapping
Netty’s bootstrap classes provide containers for the configuration of an application’s
network layer, which involves either binding a process to a given port or connecting
one process to another one running on a specified host at a specified port.
Netty’s Channel implementations are thread-safe, so you can store a reference to a
Channel and use it whenever you need to write something to the remote peer, even
when many threads are in use.
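A sketch of that pattern follows, using EmbeddedChannel so the snippet is self-contained and joining the threads one after the other so the result is deterministic; in a real application the shared Channel would be a connected socket:

```java
import io.netty.channel.Channel;
import io.netty.channel.embedded.EmbeddedChannel;

public class SharedChannelWrites {
    public static int demo() throws InterruptedException {
        // Stands in for a connected Channel obtained elsewhere in the application.
        final EmbeddedChannel channel = new EmbeddedChannel();
        Runnable writer = () -> channel.writeAndFlush("some data");

        // Distinct threads may safely share the same Channel reference.
        Thread t1 = new Thread(writer);
        Thread t2 = new Thread(writer);
        t1.start(); t1.join();
        t2.start(); t2.join();

        return channel.outboundMessages().size();  // both writes arrived
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("messages written: " + demo());
    }
}
```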
Zero-copy
Zero-copy is a feature currently available only with NIO and Epoll transport. It allows
you to quickly and efficiently move data from a file system to the network without
copying from kernel space to user space, which can significantly improve performance in protocols such as FTP or HTTP. This feature is not supported by all OSes.
Specifically, it is not usable with file systems that implement data encryption or compression—only the raw content of a file can be transferred. Conversely, transferring
files that have already been encrypted isn’t a problem.
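In Netty, zero-copy file transfer is exposed through the FileRegion interface. The sketch below builds a DefaultFileRegion over a temporary file (the file and its contents are illustrative) and writes it to an EmbeddedChannel, which merely passes the region through; an NIO or epoll transport would hand it to the kernel's transfer mechanism:

```java
import io.netty.channel.DefaultFileRegion;
import io.netty.channel.FileRegion;
import io.netty.channel.embedded.EmbeddedChannel;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class ZeroCopyExample {
    public static long demo() throws IOException {
        File file = File.createTempFile("payload", ".bin");
        file.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(file)) {
            out.write(new byte[1024]);   // illustrative content
        }

        // A FileRegion describes a byte range of a file to be transferred
        // without copying it through user space (on transports that support it).
        FileRegion region = new DefaultFileRegion(file, 0, file.length());

        EmbeddedChannel channel = new EmbeddedChannel();
        channel.writeAndFlush(region).syncUninterruptibly();
        FileRegion written = channel.readOutbound();  // passed through untouched here
        return written.count();
    }

    public static void main(String[] args) throws IOException {
        System.out.println("bytes in region: " + demo());
    }
}
```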