Content delivery network

 From Wikipedia, the free encyclopedia

A content delivery network or content distribution network (CDN) is a system of computers containing copies of data placed at various nodes of a network. When properly designed and implemented, a CDN can improve access to the data it caches by increasing access bandwidth and redundancy and by reducing access latency. Data content types often cached in CDNs include web objects, downloadable objects (media files, software, documents), applications, live streaming media, and database queries.

Contents

  • 1 CDN benefits
  • 2 ASP versus on-net
  • 3 Technology
  • 4 Content networking techniques
    • 4.1 Content service protocols
    • 4.2 Peer-to-peer CDNs
  • 5 CDN Trends
    • 5.1 Emergence of Telco CDNs
      • 5.1.1 Telco CDN Advantages
    • 5.2 Federated CDNs
    • 5.3 EDNS-Client-Subnet Standard
  • 6 Notable content delivery service providers
    • 6.1 Free CDNs
    • 6.2 Traditional Commercial CDNs
    • 6.3 Telco CDNs
    • 6.4 Commercial CDNs using P2P for delivery
  • 7 See also
  • 8 References
  • 9 Further reading

CDN benefits

The combined capacity of strategically placed servers can be higher than the network backbone capacity, which can result in a significant increase in the number of concurrent users that can be served. For instance, with a 10 Gbit/s network backbone and 200 Gbit/s of central server capacity, only 10 Gbit/s can be delivered. But when 10 servers are moved to 10 edge locations, the total capacity can be 10×10 Gbit/s.
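The arithmetic behind this example can be made explicit. A minimal sketch in Python, using only the hypothetical figures quoted above (all values in Gbit/s):

```python
backbone_gbps = 10          # central backbone link
central_server_gbps = 200   # aggregate server capacity behind that link
edge_sites = 10             # servers moved to 10 edge locations
edge_uplink_gbps = 10       # local capacity available at each edge site

# Served centrally, delivery is capped by the backbone bottleneck.
central_delivery = min(backbone_gbps, central_server_gbps)

# Served from the edge, each site uses its own local capacity.
edge_delivery = edge_sites * edge_uplink_gbps

print(central_delivery)  # 10
print(edge_delivery)     # 100
```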

Strategically placed edge servers decrease the load on interconnects, public peers, private peers and backbones, freeing up capacity and lowering delivery costs. This uses the same principle as above: instead of loading all traffic onto a backbone or peer link, a CDN can offload it by redirecting traffic to edge servers.

CDNs generally deliver content over TCP and UDP connections. TCP throughput over a network is affected by both latency and packet loss. To reduce both of these parameters, CDNs traditionally place servers as close as possible to the edge networks that users are on. Theoretically, the closer the content, the faster the delivery, although network distance may not be the factor that leads to the best performance. End users will likely experience less jitter, fewer network peaks and surges, and improved stream quality, especially in remote areas. The increased reliability allows a CDN operator to deliver HD-quality content with high Quality of Service, low costs and low network load. Some providers also utilize TCP acceleration technology to further boost the CDN's performance and the end-user experience.
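One widely cited rough model of this effect is the Mathis et al. approximation, in which steady-state TCP throughput is bounded by roughly MSS/RTT × C/√p for packet loss rate p. A minimal sketch with purely illustrative numbers shows why moving a server closer to the user (cutting the round-trip time) raises the achievable throughput:

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_seconds, loss_rate, c=1.22):
    """Rough upper bound on steady-state TCP throughput (Mathis et al. approximation)."""
    return (mss_bytes * 8 / rtt_seconds) * (c / sqrt(loss_rate))

# Same packet loss rate, but a nearby edge server cuts RTT from 120 ms to 20 ms.
for rtt in (0.120, 0.020):
    mbps = mathis_throughput_bps(1460, rtt, 0.0001) / 1e6
    print(f"RTT {rtt * 1000:.0f} ms -> about {mbps:.0f} Mbit/s")
```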

CDNs can dynamically distribute assets to strategically placed redundant core, fallback and edge servers. CDNs can have automatic server availability sensing with instant user redirection. A CDN can offer 100% availability, even with large power, network or hardware outages.
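A minimal sketch of such availability sensing and redirection, assuming hypothetical server pools and a crude TCP-connect health probe; a production CDN would use much richer health signals:

```python
import socket

# Hypothetical, preference-ordered server pools: edge first, then fallback, then core.
POOLS = {
    "edge":     ["edge1.example.net", "edge2.example.net"],
    "fallback": ["fallback1.example.net"],
    "core":     ["origin.example.net"],
}

def is_healthy(host, port=80, timeout=1.0):
    """Crude availability probe: can we open a TCP connection to the server quickly?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server():
    """Redirect the user to the first healthy server, walking the tiers in order."""
    for tier in ("edge", "fallback", "core"):
        for host in POOLS[tier]:
            if is_healthy(host):
                return host
    raise RuntimeError("no servers available in any tier")
```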

CDN technologies give more control of asset delivery and network load. They can optimize capacity per customer, provide views of real-time load and statistics, reveal which assets are popular, show active regions and report exact viewing details to the customers. These usage details are an important feature for a CDN provider to offer, because once the content source has been plugged into the CDN, end-user connections are served by the CDN edges rather than the content source, and usage logs are no longer available at the source server.

ASP versus on-net

Most CDNs are operated as an application service provider (ASP) on the Internet, although an increasing number of Internet network owners, such as AT&T and Level3, have built their own CDNs to improve on-net content delivery and to generate revenue from content customers. Some develop internal CDN software; others use commercially available software.

Technology

CDN nodes are usually deployed in multiple locations, often over multiple backbones. These nodes cooperate with each other to satisfy requests for content by end users, transparently moving content to optimize the delivery process. Optimization can take the form of reducing bandwidth costs, improving end-user performance (reducing page load times and improving user experience), or increasing global availability of content.

The number of nodes and servers making up a CDN varies depending on the architecture; some reach thousands of nodes with tens of thousands of servers on many remote PoPs. Others build a global network with a small number of geographical PoPs.

Requests for content are typically algorithmically directed to nodes that are optimal in some way. When optimizing for performance, locations that are best for serving content to the user may be chosen. This may be measured by choosing locations that are the fewest hops or the fewest network seconds away from the requesting client, or that have the highest availability in terms of server performance (both current and historical), so as to optimize delivery across local networks. When optimizing for cost, locations that are least expensive may be chosen instead.
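Conceptually, this routing decision can be framed as scoring each candidate node and picking the minimum. A minimal sketch, in which all node names, measurements and weights are hypothetical:

```python
# Candidate nodes with hypothetical measurements for a given client.
NODES = [
    {"name": "pop-fra", "rtt_ms": 12, "availability": 0.999, "cost_per_gb": 0.030},
    {"name": "pop-nyc", "rtt_ms": 85, "availability": 0.999, "cost_per_gb": 0.012},
]

def score(node, w_latency=1.0, w_cost=0.0):
    """Lower is better: penalize latency, delivery cost, and poor availability."""
    unavailability_penalty = (1.0 - node["availability"]) * 1_000
    return (w_latency * node["rtt_ms"]
            + w_cost * node["cost_per_gb"] * 1_000
            + unavailability_penalty)

best_for_performance = min(NODES, key=lambda n: score(n, w_latency=1.0, w_cost=0.0))
best_for_cost        = min(NODES, key=lambda n: score(n, w_latency=0.0, w_cost=1.0))
print(best_for_performance["name"])  # pop-fra: closest to the client
print(best_for_cost["name"])         # pop-nyc: cheapest delivery
```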

In an optimal scenario, these two goals tend to align, as servers that are close to the end user at the edge of the network may have an advantage in performance or cost. The edge network is grown outward from the origin(s) by further acquiring (via purchase, peering, or exchange) co-location facilities, bandwidth and servers.

Content networking techniques

The Internet was designed according to the end-to-end principle.[1] This principle keeps the core network relatively simple and moves the intelligence as much as possible to the network end-points: the hosts and clients. As a result, the core network is specialized, simplified, and optimized to only forward data packets.

Content Delivery Networks augment the end-to-end transport network by distributing on it a variety of intelligent applications employing techniques designed to optimize content delivery. The resulting tightly integrated overlay uses web caching, server-load balancing, request routing, and content services.[2] These techniques are briefly described below.

Web caches store popular content on servers that have the greatest demand for the content requested. These shared network appliances reduce bandwidth requirements, reduce server load, and improve the client response times for content stored in the cache.
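A minimal sketch of the caching idea, assuming a hypothetical fetch callable that retrieves content from the origin; real web caches also honour HTTP cache-control semantics, which are omitted here:

```python
import time

class WebCache:
    """A shared cache keyed by URL with a simple freshness lifetime (TTL)."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}                          # url -> (fetched_at, body)

    def get(self, url, fetch):
        entry = self.store.get(url)
        if entry is not None and time.time() - entry[0] < self.ttl:
            return entry[1]                      # cache hit: no trip to the origin
        body = fetch(url)                        # cache miss: fetch from the origin
        self.store[url] = (time.time(), body)
        return body
```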

Server-load balancing uses one or more techniques, including service-based (global load balancing) or hardware-based approaches, i.e. layer 4–7 switches (also known as web switches, content switches, or multilayer switches), to share traffic among a number of servers or web caches. Here the switch is assigned a single virtual IP address. Traffic arriving at the switch is then directed to one of the real web servers attached to it. This balances load, increases total capacity, improves scalability, and provides increased reliability by redistributing the load of a failed web server and providing server health checks.

A content cluster or service node can be formed using a layer 4–7 switch to balance load across a number of servers or a number of web caches within the network.
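A minimal sketch of the dispatch logic behind such a virtual IP, with placeholder addresses and a health table assumed to be maintained by separate periodic checks:

```python
import itertools

# Real servers behind a single virtual IP (VIP); addresses are placeholders.
REAL_SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
healthy = {ip: True for ip in REAL_SERVERS}   # assumed to be updated by periodic health checks

_rotation = itertools.cycle(REAL_SERVERS)

def pick_real_server():
    """Round-robin over the real servers, skipping any that are currently marked down."""
    for _ in range(len(REAL_SERVERS)):
        ip = next(_rotation)
        if healthy[ip]:
            return ip
    raise RuntimeError("all real servers behind the VIP are down")
```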

Request routing directs client requests to the content source best able to serve the request. This may involve directing a client request to the service node that is closest to the client, or to the one with the most capacity. A variety of algorithms are used to route the request. These include Global Server Load Balancing, DNS-based request routing, Dynamic metafile generation, HTML rewriting,[3] and anycasting.[4] Proximity—choosing the closest service node—is estimated using a variety of techniques including reactive probing, proactive probing, and connection monitoring.
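A minimal sketch of proximity estimation by reactive probing, assuming placeholder node hostnames and using the time to open a TCP connection as a crude distance measure:

```python
import socket
import time

# Candidate service nodes; hostnames are placeholders.
CANDIDATES = ["pop-ams.example.net", "pop-sin.example.net", "pop-iad.example.net"]

def probe_rtt(host, port=80, timeout=1.0):
    """Measure the time to complete a TCP handshake; unreachable nodes sort last."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def route_request():
    """Direct the request to the candidate node that answered the probe fastest."""
    return min(CANDIDATES, key=probe_rtt)
```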

CDNs use a variety of methods of content delivery including, but not limited to, manual asset copying, active web caches, and global hardware load balancers.

Content service protocols

Several protocol suites are designed to provide access to a wide variety of content services distributed throughout a content network. The Internet Content Adaptation Protocol (ICAP) was developed in the late 1990s[5][6] to provide an open standard for connecting application servers. A more recently defined and robust solution is provided by the Open Pluggable Edge Services (OPES) protocol.[7] This architecture defines OPES service applications that can reside on the OPES processor itself or be executed remotely on a Callout Server. Edge Side Includes (ESI) is a small markup language for edge-level dynamic web content assembly. It is fairly common for websites to have generated content, whether because of changing content such as catalogs or forums, or because of personalization; this creates a problem for caching systems, and a group of companies created ESI to overcome it.
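A minimal sketch of the ESI idea, assuming a hypothetical fragment fetcher: the page template stays cacheable, and only the marked fragments are assembled per request at the edge:

```python
import re

# A cacheable page template in which only the <esi:include> fragments are dynamic.
TEMPLATE = """<html><body>
<h1>Catalog</h1>
<esi:include src="/fragments/cart"/>
<esi:include src="/fragments/recommendations"/>
</body></html>"""

def fetch_fragment(src):
    # Stand-in for a per-request fetch of dynamic or personalized content.
    return f"<div>fragment fetched from {src}</div>"

def assemble(template):
    """Replace each <esi:include src="..."/> tag with its freshly fetched fragment."""
    return re.sub(r'<esi:include src="([^"]+)"\s*/>',
                  lambda m: fetch_fragment(m.group(1)),
                  template)

print(assemble(TEMPLATE))
```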

Peer-to-peer CDNs

Although peer-to-peer (P2P) is not traditional CDN technology, it is increasingly used to deliver content to end users. P2P claims low cost and efficient distribution. Even though P2P actually generates more traffic than traditional client-server CDNs for the edge provider (because a peer also uploads data instead of just downloading it), it is welcomed by parties running content delivery and distribution services. The real strength of P2P shows when data in high demand, such as the latest episode of a television show or a software patch or update, has to be distributed in a short period of time. One advantage is that the more people who download the same data, the more efficient P2P is for the provider, slashing the transit fees that a CDN provider has to pay to their upstream IP transit providers.
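The transit-cost argument can be illustrated with a rough model; the peer upload share below is purely illustrative:

```python
# With N downloaders of the same file, a client-server CDN uploads the file N
# times over paid transit, while in a P2P swarm peers carry part of that upload.
def cdn_transit_gb(file_size_gb, downloaders):
    return file_size_gb * downloaders

def p2p_transit_gb(file_size_gb, downloaders, peer_upload_share=0.8):
    # peer_upload_share is purely illustrative: the fraction peers upload to each other.
    return file_size_gb * downloaders * (1.0 - peer_upload_share)

print(cdn_transit_gb(1.5, 100_000))   # 150000.0 GB of provider transit
print(p2p_transit_gb(1.5, 100_000))   #  30000.0 GB of provider transit
```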

On the other hand, "long tail" material does not benefit much from P2P delivery schemes, since, to gain an advantage over traditional distribution models, a P2P-enabled CDN must force the storing (caching) of data on peers, something that is usually not desired by users and is rarely enabled.

Contrary to popular belief, P2P is not limited to low-bandwidth audio-video signal distribution. There is no technical boundary, built-in inefficiency, or flaw-by-design in peer-to-peer technology to prevent distribution of a full HD audio and video signal at, for example, 8 Mbit/s. It is environmental factors, such as low (upload) bandwidth or inadequate computing power in consumer electronics devices, that prevent HD material from being publicly available in P2P CDNs. (Low bandwidth problems also apply to traditional CDNs, though.)

There are some concerns about lack of Quality of Service control over P2P distribution, but these are being addressed by the P2P-Next consortium. Other concerns include security (e.g. modification of content to include malware) and DRM.

CDN Trends

Emergence of Telco CDNs

The rapid growth of streaming video traffic[8] has necessitated large capital expenditures by broadband providers[9] in order to meet this demand and to retain subscribers by delivering a sufficiently good quality of experience.

To address this, telecommunications service providers (TSPs) have begun to launch their own content delivery networks[10] as a means to lessen the demands on the network backbone and to reduce infrastructure investments.

Telco CDN Advantages

Because they own the networks over which video content is transmitted, Telco CDNs have advantages over traditional CDNs.

They own the last mile and can deliver content closer to the end user because it can be cached deep in their networks.[11] This deep caching minimizes the distance that video data travels over the general Internet and delivers it more quickly and reliably.

Telco CDNs also have a built-in cost advantage since traditional CDNs must lease bandwidth from them and build the operator’s margin into their own cost structures.

Federated CDNs

In June 2011, StreamingMedia.com reported that a group of TSPs had founded an Operator Carrier Exchange (OCX)[12] to interconnect their networks and compete more directly against large traditional CDNs like Akamai and Limelight Networks, which have extensive points of presence (POPs) worldwide. In this way, telcos are building a federated CDN offering that is more attractive to a content provider wishing to deliver its content to the aggregated audience of the federation.

It is likely that other telco CDN federations will be created in the near future. They will grow through the enrolment of new telcos, each bringing additional network presence and an Internet subscriber base to the existing federation.

EDNS-Client-Subnet Standard

In August 2011, a global consortium of leading Internet service providers led by Google announced their official implementation of the IETF's edns-client-subnet standard, which is intended to accurately localize DNS resolution responses. The initiative involves a limited number of leading DNS and CDN service providers. Under the edns-client-subnet standard, the recursive DNS servers of CDNs will utilize the IP address of the original client when resolving DNS requests.[13] Traditional CDNs rely on the IP address of the DNS resolver instead of that of the client when resolving DNS requests, which can pose latency problems if the DNS resolver of the client's ISP is far from the location of the client.
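A minimal sketch of why the client subnet matters, with illustrative prefixes and PoP names: the authoritative CDN name server localizes its answer using the ECS subnet when present, and otherwise falls back to the resolver's address:

```python
import ipaddress

# Illustrative address-to-PoP table kept by the authoritative CDN name server.
REGION_MAP = {
    ipaddress.ip_network("203.0.113.0/24"): "pop-syd.example.net",   # the user's ISP range
    ipaddress.ip_network("198.51.100.0/24"): "pop-ams.example.net",  # the public resolver's range
}

def pick_pop(resolver_ip, client_subnet=None):
    """Prefer the ECS client subnet when supplied; otherwise fall back to the resolver IP."""
    key = client_subnet or ipaddress.ip_network(resolver_ip + "/32")
    for network, pop in REGION_MAP.items():
        if key.subnet_of(network):
            return pop
    return "pop-default.example.net"

# Without ECS, the user is answered as if located at the (possibly distant) resolver;
# with ECS, the answer is localized to the user's own subnet.
print(pick_pop("198.51.100.53"))                                          # pop-ams.example.net
print(pick_pop("198.51.100.53", ipaddress.ip_network("203.0.113.0/24")))  # pop-syd.example.net
```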
