1. The DOM has a user-driven event model: a set of interface elements arranged in a tree structure (HTML, XML, etc.). When a user interacts with a particular part of the interface, an event is fired along with a context, which is the HTML/XML element on which the click or other activity took place. That context has a parent and potentially children. Because the context sits within a tree, the model includes the concepts of bubbling and capturing, which allow elements either up or down the tree to receive the event that was fired.
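As a rough browser-side sketch (assuming a hypothetical #child element nested inside a #parent element), the third argument to addEventListener selects the capturing phase:

var parent = document.getElementById('parent');
var child = document.getElementById('child');

// true => receive the event during the capturing phase (top-down)
parent.addEventListener('click', function() {
  console.log('parent saw the click while capturing');
}, true);

// default (false) => receive the event during the bubbling phase (bottom-up)
child.addEventListener('click', function() {
  console.log('child handled the click');
});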
2. Node created the EventEmitter class to provide basic event functionality. All event functionality in Node revolves around EventEmitter, which is also designed to be an interface class for other classes to extend; it would be unusual to use an EventEmitter instance directly. EventEmitter has a handful of methods, the main two being on and emit. The on method creates an event listener for an event, and it takes two parameters: the name of the event to listen for and the function to call when that event is emitted.
3. Creating a new class that supports events with EventEmitter:
var util = require('util'),
    EventEmitter = require('events').EventEmitter;

// constructor for our own class
var Server = function() {
  console.log('init');
};

// make Server inherit EventEmitter's methods (on, emit, etc.)
util.inherits(Server, EventEmitter);

var s = new Server();
s.on('abc', function() {
  console.log('abc');
});
4. Firing the event listener is as simple as calling the emit method. It’s important to note that these events are instance-based. There are no global events. When calling emit, in addition to the event name, you can also pass an arbitrary list of parameters. These will be passed to the function listening to the event.
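Continuing the Server example above, a minimal sketch of emitting events (the 'message' event name and its arguments are made up for illustration):

s.emit('abc'); // fires the listener registered above

s.on('message', function(from, body) {
  console.log(from + ' says: ' + body);
});
s.emit('message', 'alice', 'hello'); // extra arguments reach the listener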
5. When emit() is called with arguments, the code below is used to call each event listener.
if (arguments.length <= 3) {
  // fast case
  handler.call(this, arguments[1], arguments[2]);
} else {
  // slower
  var args = Array.prototype.slice.call(arguments, 1);
  handler.apply(this, args);
}
If emit() is called with three or fewer arguments, the method takes a shortcut and uses call(). Otherwise, it uses the slower apply() to pass all the arguments as an array. The important thing to recognize here is that Node makes both of these calls using the this argument directly. This means that the context in which the event listeners are called is the context of the EventEmitter, not the callbacks' original context.
6. The http.createServer() method provides us with a new instance of the HTTP Server class, which is the class we use to define the actions taken when Node receives incoming HTTP requests.
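A minimal sketch of creating a server this way (the port number is arbitrary):

var http = require('http');

// the callback passed to createServer() is registered as a 'request' listener
var server = http.createServer(function(req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
});

server.listen(8080);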
7. The http Server class has six events and three methods. Notice that most of the methods are used to initialize the server, whereas the events are used during its operation.
8. The connection and close events indicate the buildup or teardown of a TCP connection to a client. It's important to remember that some clients will be using HTTP/1.1, which supports keepalive, so their TCP connections may remain open across multiple HTTP requests. When a new TCP stream is created for a request, a connection event is emitted, and the event passes the TCP stream as a parameter. The stream is also available as the request.connection property on each request that happens over it. However, only one connection event is emitted per stream, which means many requests can arrive from a client with only one connection event.
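A small sketch that counts connections separately from requests; with keepalive clients, the request count can grow much faster than the connection count:

var http = require('http');
var connections = 0;
var requests = 0;

var server = http.createServer(function(req, res) {
  requests++;
  res.end('connections: ' + connections + ', requests: ' + requests + '\n');
});

// emitted once per TCP stream, not once per HTTP request
server.on('connection', function(stream) {
  connections++;
});

server.listen(8080);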
9. The request, checkContinue, upgrade, and clientError events are associated with HTTP requests. The checkContinue event allows you to take more direct control of an HTTP request in which the client streams chunks of data to the server: before sending the body, the client asks (with an Expect: 100-continue header) whether it can continue, at which point this event fires. If an event handler is created for this event, the request event will not be emitted for that request. The upgrade event is emitted when a client asks for a protocol upgrade; the http server will deny upgrade requests unless there is an event handler for this event. The clientError event passes on any error events sent by the client.
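A hedged sketch of handling checkContinue directly; note that once this handler exists, the ordinary request event is not emitted for these requests:

var http = require('http');

var server = http.createServer(function(req, res) {
  res.end('normal request\n');
});

// fired when a client sends Expect: 100-continue before streaming its body
server.on('checkContinue', function(req, res) {
  res.writeContinue(); // tell the client it may start sending data
  req.on('data', function(chunk) {
    // consume the streamed body chunk by chunk
  });
  req.on('end', function() {
    res.end('received\n');
  });
});

server.listen(8080);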
10. You can use the same http module when making HTTP requests; in that case you work with the http.ClientRequest class. There are two factory methods for this class: a general-purpose one (http.request()) and a convenience method (http.get()).
11. Creating an HTTP request:
var http = require('http');

var opts = {
  host: 'www.google.com',
  port: 80,
  path: '/',
  method: 'GET'
};

var req = http.request(opts, function(res) {
  console.log(res);
  res.on('data', function(data) {
    console.log(data);
  });
});

req.end();
12. The factory method http.request() takes an options object and an optional callback argument. The passed callback listens to the response event, and when a response event is received, we can process the results of the request. Until we call the end() method, request won’t initiate the HTTP request, because it doesn’t know whether it should still be waiting for us to send data.
13. http.get() does exactly the same thing as http.request(), but it's slightly more concise: we can omit the method attribute of the config object and leave out the call to request.end() because it's implied.
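The same request as above, as a rough http.get() sketch:

var http = require('http');

http.get({ host: 'www.google.com', port: 80, path: '/' }, function(res) {
  console.log(res.statusCode);
});
// no req.end() needed; it is called for us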
14. We get chunks of data in Buffers from the response. If you specify an encoding with the response.setEncoding() method, each chunk will be delivered as a string in that encoding instead.
15. Writing data to an upstream service
var http = require('http');

var options = {
  host: 'www.example.com',
  port: 80,
  path: '/submit',
  method: 'POST'
};

var req = http.request(options, function(res) {
  res.setEncoding('utf8');
  res.on('data', function(chunk) {
    console.log('BODY: ' + chunk);
  });
});

req.write("my data");
req.write("more of my data");
req.end();
16. The http.ClientRequest.write() method allows you to send data upstream, and it requires you to explicitly call http.ClientRequest.end() to indicate you’re finished sending data. Whenever ClientRequest.write() is called, the data is sent upstream (it isn’t buffered), but the server will not respond until ClientRequest.end() is called.
17. You can stream data to a server with ClientRequest.write() by coupling the writes to the data events of a readable Stream. This is ideal if you need to send a file from disk to a remote server over HTTP, as sketched below.
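A rough sketch of that pattern (the host, path, and filename are placeholders):

var fs = require('fs');
var http = require('http');

var req = http.request({
  host: 'www.example.com',
  port: 80,
  path: '/upload',
  method: 'POST'
}, function(res) {
  console.log('upload finished with status ' + res.statusCode);
});

var file = fs.createReadStream('bigfile.txt');
file.on('data', function(chunk) {
  req.write(chunk); // each chunk read from disk is sent upstream immediately
});
file.on('end', function() {
  req.end(); // no more data; the server can now respond
});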
18. The ClientResponse object stores a variety of information about the response. In general, it is pretty intuitive. Its frequently useful properties include statusCode (which contains the HTTP status code) and headers (the response headers object). Also hung off of ClientResponse are various streams and properties that you may or may not want to interact with directly.
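For instance, inside a response callback (a sketch using a placeholder host):

var http = require('http');

http.get({ host: 'www.example.com', path: '/' }, function(res) {
  console.log(res.statusCode);              // e.g. 200
  console.log(res.headers['content-type']); // headers are a plain object
});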
19. The URL module provides tools for easily parsing and dealing with URL strings. It’s extremely useful when you have to deal with URLs. The module offers three methods: parse, format, and resolve.
20. Parsing a URL using the URL module
> var URL = require('url');
> var myUrl = "http://www.nodejs.org/some/url/?with=query&param=that&are=awesome#alsoahash";
> myUrl
'http://www.nodejs.org/some/url/?with=query&param=that&are=awesome#alsoahash'
> parsedUrl = URL.parse(myUrl);
{ href: 'http://www.nodejs.org/some/url/?with=query&param=that&are=awesome#alsoahash'
, protocol: 'http:'
, slashes: true
, host: 'www.nodejs.org'
, hostname: 'www.nodejs.org'
, hash: '#alsoahash'
, search: '?with=query&param=that&are=awesome'
, query: 'with=query&param=that&are=awesome'
, pathname: '/some/url/'
}
> parsedUrl = URL.parse(myUrl, true);
{ href: 'http://www.nodejs.org/some/url/?with=query&param=that&are=awesome#alsoahash'
, protocol: 'http:'
, slashes: true
, host: 'www.nodejs.org'
, hostname: 'www.nodejs.org'
, hash: '#alsoahash'
, search: '?with=query&param=that&are=awesome'
, query: { with: 'query'
  , param: 'that'
  , are: 'awesome'
  }
, pathname: '/some/url/'
}
We just call the parse method from the URL module on the string. It returns a data structure representing the parts of the parsed URL. The components it can produce are: href, protocol, host, auth, hostname, port, pathname, search, query, and hash.

The href is the full URL that was originally passed to parse. The protocol is the protocol used in the URL. host is the fully qualified hostname of the URL; this could be as simple as the hostname for a local server or a fully qualified domain name such as www.google.com, and it might also include a port number or credentials. A URL of the form user:password@hostname:port is broken down further into auth, containing just the user credentials; port, containing just the port; and hostname, containing the hostname portion of the URL. The URL module doesn’t have the capability to split the hostname down into its components, such as domain or TLD (Top Level Domain, such as .com or .net).

The pathname is the entire filepath after the host. In http://sports.yahoo.com/nhl, it is /nhl. The next component is the search component, which stores the HTTP GET parameters in the URL. For example, if the URL were http://mydomain.com/?foo=bar&baz=qux, the search component would be ?foo=bar&baz=qux, including the leading ?. The query component contains one of two things, depending on how parse was called. parse takes two arguments: the URL string and an optional Boolean that determines whether the query string should be parsed using the querystring module. If the second argument is false, query will just contain a string similar to that of search but without the leading ?. If you don’t pass anything for the second argument, it defaults to false. The hash component contains the part of the URL after the #. Again, note that the # is included in the string.
21. The querystring module is a very simple helper module for dealing with query strings; it provides an easy way to create objects from query strings. The main methods it offers are parse and decode, but some internal helper functions, such as escape, unescape, unescapeBuffer, encode, and stringify, are also exposed. When you get content from an HTTP POST that is application/x-www-form-urlencoded, it will also be in query string form. All the browser manufacturers have standardized around this approach, and by default, forms in HTML send data to the server this way as well.
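A quick sketch of parsing a query string:

var querystring = require('querystring');

querystring.parse('name=bob&age=4&fruit=apple');
// => { name: 'bob', age: '4', fruit: 'apple' }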
22. The querystring.encode() function takes a query string’s key-value pair object and stringifies it. Ideally you should use an object that contains only the data you want, because the encode method will add all properties of the object. However, if a property value isn’t a string, Boolean, or number, it won’t be serialized and the key will just be included with an empty value.
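A sketch of that behavior (the nested object is not serialized, so its key ends up with an empty value):

var querystring = require('querystring');

querystring.encode({ name: 'bob', age: 4, active: true });
// => 'name=bob&age=4&active=true'

querystring.encode({ name: 'bob', prefs: { color: 'red' } });
// => 'name=bob&prefs='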
23. The stream API provides an abstract interface which contains common methods and properties that are available in specific implementations of streams. Streams can be readable, writable, or both. All streams are EventEmitter instances, allowing them to emit events.
24. Readable streams are all about emitting data events. These events represent a stream of data as a stream of events. To make this manageable, streams have a number of features that allow you to configure how much data you get and how fast. In real use cases, you might either stream the data somewhere else or spool it into bigger pieces before you work on it. In essence, the data event simply provides access to the data, and you have to figure out what to do with each chunk.
25. Creating a readable file stream
var fs = require('fs');

var stream = fs.createReadStream('data.txt');

stream.on('data', function(chunk) {
  console.log(chunk); // each data event delivers a chunk of the file
});

stream.on('end', function() {
  console.log('finished reading');
});
26. The spooling pattern is used when we need an entire resource available before we deal with it. We know it’s important not to block the event loop if Node is to perform well, so even though we don’t want to act on the data until we’ve received all of it, we don’t want to block the event loop while waiting. In this scenario, we use a stream to get the data, but use the data only when enough of it is available. Typically this means waiting until the stream’s end event fires, but it could be another event or condition.
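A minimal spooling sketch (using a placeholder data.txt file; the data is acted on only once the end event fires):

var fs = require('fs');

var spool = '';
var stream = fs.createReadStream('data.txt');
stream.setEncoding('utf8');

stream.on('data', function(chunk) {
  spool += chunk; // accumulate without blocking the event loop
});

stream.on('end', function() {
  console.log(spool); // the whole resource is finally available here
});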
27. The filesystem module mimics the POSIX style of file I/O. It is a somewhat unique module in that all of the methods have both asynchronous and synchronous versions.
28. Reading and deleting a file asynchronously using nested callbacks:
var fs = require('fs');

fs.readFile('warandpeace.txt', function(err, data) {
  console.log('War and Peace: ' + data);
  fs.unlink('warandpeace.txt', function(err) {
    if (err) {
      console.error(err);
    }
  });
});
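For contrast, a sketch of the synchronous versions of the same two calls, which block until each operation completes:

var fs = require('fs');

var data = fs.readFileSync('warandpeace.txt');
console.log('War and Peace: ' + data);
fs.unlinkSync('warandpeace.txt');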
29. Node introduces the Buffer class to let you work with binary data. Buffers are an extension to the V8 engine, which means that they have their own set of pitfalls. Buffers are actually a direct allocation of memory, and they provide direct memory access, warts and all. Once a Buffer is created, it is a fixed size; if you want to add more data, you must copy its contents into a larger Buffer. Although some of these features may seem frustrating, they allow Buffer to perform at the speed necessary for many data operations on the server. It was a conscious design choice to trade off some programmer convenience for performance.
30. In JavaScript, you can create a number from a hex value using the notation 0x in front of the hex value.
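For example:

var n = 0xFF;
console.log(n); // 255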
31. Once you copy things to a Buffer, they will be stored as their binary representations. You can always convert the binary representation in the buffer back into other things, such as strings, later. So a Buffer is defined only by its size, not by the encoding or any other indication of its meaning.
32. Each character in UTF-8 is represented by at least 1 byte, but sometimes up to 4. Essentially, the first 128 values are good old ASCII, whereas the others are pushed further down in the map and represented by higher numbers. So to be safe, you should define a Buffer to be four times the size of the largest input you can accept, measured in UTF characters.
33. Buffers can be created using three kinds of parameters: the length of the Buffer in bytes, an array of bytes to copy into the Buffer, or a string to copy into the Buffer.
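A sketch of the three forms, using the older Buffer constructor described here (newer Node versions prefer Buffer.alloc() and Buffer.from()):

var b1 = new Buffer(10);               // 10 uninitialized bytes
var b2 = new Buffer([1, 2, 3]);        // bytes copied from an array
var b3 = new Buffer('hello', 'utf8');  // the bytes of the encoded string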
34. When we create a Buffer with a length, we get that many bytes. However, because the Buffer is just a direct allocation of memory, it is uninitialized, and its contents are whatever happened to occupy that memory before. This is unlike all the native JavaScript types, which initialize their memory so that a newly created primitive or object never contains whatever was already sitting in that memory space.
35. When you are building highly scalable apps, it’s often worth using Buffers to hold strings. This is especially true if you are just shunting the strings around the application without modifying them. Therefore, even though strings exist as primitives in JavaScript, it’s still very common to keep strings in Buffers in Node.
36. When we create a Buffer from a string, the encoding defaults to UTF-8; you can pass a different encoding as the second parameter.
37. The Buffer.byteLength() method takes a string and an encoding and returns the string’s length in bytes, rather than in characters as String.length does.
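For example, with a character outside the ASCII range:

var s = 'über';
console.log(s.length);                     // 4 characters
console.log(Buffer.byteLength(s, 'utf8')); // 5 bytes, since ü takes 2 bytes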
38. The Buffer.write() method writes a string to a specific index of a Buffer. If there is room in the Buffer starting from the specified offset, the entire string will be written. Otherwise, characters are truncated from the end of the string to fit the Buffer. In either case, Buffer.write() will return the number of bytes that were written. In the case of UTF-8 strings, if a whole character can’t be written to the Buffer, none of the bytes for that character will be written. If possible, when writing UTF-8, Buffer.write() will terminate the character string with a NUL character (binary 0).
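A small sketch of the return value and truncation behavior:

var b = new Buffer(5);
var written = b.write('Hello, world', 0); // only what fits is written
console.log(written);                     // 5
console.log(b.toString());                // 'Hello'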