Source: https://blog.csdn.net/weixin_43502661/article/details/89228324
The realization of MEC rests on a virtualization platform and builds on recent advances in NFV, ICN, and SDN:

| Abbreviation | Full name | Role in MEC |
|---|---|---|
| NFV | Network functions virtualization | Creates virtual machines on edge devices to provide computation services for mobile devices, so that different tasks can run concurrently on the same edge hardware |
| ICN | Information-centric networking | Provides MEC with an end-to-end service-identification model, shifting from host-centric to information-centric networking to enable content-aware computing |
| SDN | Software-defined networking | Lets MEC administrators manage services through functional abstraction, enabling scalable and dynamic computing |
Advantages
MEC has the advantages of:
Definition: a task that is highly integrated or relatively simple, cannot be partitioned, and must be executed as a whole (either on the mobile device or offloaded to the MEC server) is said to use binary offloading.
Details:
In particular, this model can also be generalized to handle soft deadline requirements, which allow a small portion of tasks to be completed after the deadline $\tau_d$.
Applicable scenario: the task-input bits are bit-wise independent and can be arbitrarily divided into different groups and executed by different entities in MEC systems.
Considering dependency: the dependencies between procedures or components cannot be ignored, so the model must capture the dependencies among different functions and ensure they execute correctly.
Representation model: a task-call graph $G(V, E)$, where
- the set of vertices $V$ represents the different procedures in the application;
- the set of edges $E$ represents the call dependencies among them.
Three typical dependency models
Node 1 and node N in Fig. 4(a)–4(c) are components that must be executed locally, since they represent the steps of collecting the I/O data and displaying the computation results, respectively.
- $c$: the required computation workload and resources of each procedure, specified on the vertices;
- $w$: the amount of input/output data of each procedure, characterized by weights on the edges.
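As a rough illustration of the task-call graph $G(V,E)$ described above, the following minimal sketch stores the vertex weights $c$ (per-procedure workload) and edge weights $w$ (inter-procedure data). All names and numbers here are invented for illustration, not taken from the survey.

```python
# Hypothetical task-call graph G(V, E): vertex weights c[v] hold the
# computation workload of each procedure, edge weights w[(u, v)] the
# amount of data passed between procedures.
task_graph = {
    "vertices": {1: 100, 2: 400, 3: 250, 4: 80},   # c: CPU cycles per procedure
    "edges": {(1, 2): 20, (2, 3): 5, (3, 4): 10},  # w: KB exchanged per call
}

def total_workload(graph):
    """Sum of computation workloads over all procedures."""
    return sum(graph["vertices"].values())

def predecessors(graph, v):
    """Procedures whose output v depends on (call dependencies)."""
    return [u for (u, t) in graph["edges"] if t == v]
```

A partitioning algorithm would walk this structure to decide, per vertex, local versus edge execution, subject to the edge weights (data-transfer cost) between differently placed vertices.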
The main design focus is reducing communication latency by designing a highly efficient air interface.
Communicating parties: in MEC systems, communications are typically between APs and mobile devices, with the possibility of direct D2D communications.
In an MEC system, a terminal device cannot communicate with the MEC server directly (it lacks the wireless interface), but it can reach APs (including BSs and public WiFi routers) and communicate with peers via D2D. Wireless APs not only provide the radio interface to the MEC server but also connect to remote data centers over backhaul links, which further allows the MEC server to offload computation tasks to other MEC servers or to large-scale cloud data centers. In addition, D2D enables peer-to-peer resource sharing and computation-load balancing within a cluster of mobile devices.
Key wireless communication technologies that MEC systems may use:
WiFi and LTE (or 5G) are two primary technologies enabling the access to MEC systems
$$t_m = \frac{LX}{f_m} \tag{1}$$
where $f_m$ is bounded by a maximum value $f_{CPU}^{max}$.
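A minimal sketch of Eq. (1), assuming (as is common in this literature, though not stated in this excerpt) that $L$ denotes the task-input bits and $X$ the CPU cycles needed per bit; the function name and the way the cap $f_{CPU}^{max}$ is applied are my own illustrative choices:

```python
def local_execution_latency(L, X, f_m, f_cpu_max):
    """Eq. (1): t_m = L*X / f_m, where L*X is the total number of CPU
    cycles for the task and f_m is the device CPU-cycle frequency,
    bounded above by f_cpu_max (illustrative assumption)."""
    f = min(f_m, f_cpu_max)  # frequency cannot exceed the hardware maximum
    return (L * X) / f
```

For example, a task of 1000 bits at 500 cycles/bit on a 1 MHz CPU takes 0.5 s; requesting a frequency above the cap simply clips to the cap.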
Service latency: because the computation resources of an edge server are relatively limited, service latency must be accounted for when designing MEC systems.
Latency for delay-sensitive applications
Consider the exact server-computation latency for latency-sensitive applications.
Specifically, assume the MEC server allocates a different VM to each mobile device, allowing independent computation.
Server execution time:
The server execution time for device $k$, denoted by $t_{s,k}$, is
$$t_{s,k}=\frac{w_k}{f_{s,k}} \tag{2}$$
where
- $w_k$: the number of CPU cycles required to process the offloaded computation workload;
- $f_{s,k}$: the server CPU-cycle frequency allocated to mobile device $k$.
Note
The server's scheduling/queuing delay should also be accounted for.
Total server-computation latency (including queuing delay)
Without loss of generality, let $k$ denote the processing order of a mobile device, referred to as mobile $k$. The total server-computation latency for device $k$, including the queuing delay, denoted by $T_{s,k}$, is
$$T_{s,k}=\sum_{i\le k}t_{s,i} \tag{3}$$
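Eqs. (2)–(3) combine into a simple cumulative sum when devices are served in order; a minimal sketch (function name and list-based interface are mine, not from the survey):

```python
def server_latency_with_queuing(w, f):
    """Per-device execution time t_{s,k} = w_k / f_{s,k} (Eq. 2), and
    total latency T_{s,k} = sum over i <= k of t_{s,i} (Eq. 3) when
    devices are processed in order 1..K.
    Returns [T_{s,1}, ..., T_{s,K}]."""
    T, acc = [], 0.0
    for w_k, f_k in zip(w, f):
        acc += w_k / f_k  # queuing delay of earlier devices plus own t_{s,k}
        T.append(acc)
    return T
```

E.g., two devices with workloads 100 and 200 cycles on 100 Hz allocations see latencies 1.0 s and 3.0 s: the second device waits for the first.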
Non-delay-sensitive applications
For non-delay-sensitive applications, the average server-computation time is derived from stochastic models; for example, the task arrivals and service times are modeled as Poisson and exponential processes, respectively.
Computation latency due to multiple VMs
Multiple VMs sharing the same physical machine introduce I/O interference among the VMs. The resulting latency, denoted by $T'_{s,k}$, is
$$T'_{s,k}=T_{s,k}(1+\epsilon)^n$$
where $\epsilon$ is the performance-degradation factor, expressed as the percentage increase in latency.
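The I/O-interference penalty above is a straightforward geometric inflation of the interference-free latency; a one-line sketch (the function name is mine):

```python
def degraded_latency(T_s, eps, n):
    """T'_{s,k} = T_{s,k} * (1 + eps)^n: latency inflated by VM I/O
    interference, where eps is the performance-degradation factor
    (fractional latency increase) and n the exponent from the model."""
    return T_s * (1 + eps) ** n
```

With $\epsilon = 0.1$ and $n = 2$, a 10 s latency grows to about 12.1 s; with $\epsilon = 0$ the latency is unchanged.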
Based on the DVFS (dynamic voltage and frequency scaling) technique, consider an MEC server that handles $K$ computation tasks, where the $k$-th task is allocated $w_k$ CPU cycles at CPU-cycle frequency $f_{s,k}$. The total energy consumed by the CPU at the MEC server, denoted by $E_s$, is then
$$E_s=\sum^{K}_{k=1}\kappa w_k f^2_{s,k} \tag{4}$$
A model based on the CPU utilization ratio [89], [90]:
[89] X. Fan, W.-D. Weber, and L. A. Barroso, "Power provisioning for a warehouse-sized computer," in Proc. 34th ACM Annu. Int. Symp. Comput. Archit. (ISCA), San Diego, CA, USA, Jun. 2007, pp. 13–23.
[90] C.-C. Lin, P. Liu, and J.-J. Wu, "Energy-efficient virtual machine provision algorithms for cloud systems," in Proc. IEEE Utility Cloud Comput. (UCC), Melbourne, VIC, Australia, Dec. 2011, pp. 81–88.
$$E_s=\alpha E_{max}+(1-\alpha)E_{max}u \tag{5}$$
where
- $E_{max}$ is the energy consumption of a fully-utilized server;
- $\alpha$ is the fraction of idle energy consumption;
- $u$ denotes the CPU utilization ratio.
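The two server-energy models above, Eq. (4) (DVFS-based) and Eq. (5) (utilization-based), are both easy to compute directly; a minimal sketch (function names are mine, not from the survey):

```python
def cpu_energy(kappa, w, f):
    """Eq. (4): E_s = sum over k of kappa * w_k * f_{s,k}^2, the DVFS
    energy model (kappa is the effective switched capacitance)."""
    return sum(kappa * w_k * f_k ** 2 for w_k, f_k in zip(w, f))

def server_energy(alpha, E_max, u):
    """Eq. (5): E_s = alpha*E_max + (1 - alpha)*E_max*u, a model linear
    in the CPU utilization ratio u (alpha = idle-energy fraction)."""
    return alpha * E_max + (1 - alpha) * E_max * u
```

Note how Eq. (5) captures the idle-power floor: even at $u = 0$ the server still draws $\alpha E_{max}$, which is exactly why the sleep-mode and consolidation strategies discussed next pay off.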
Implication of the model
This model suggests that an energy-efficient MEC system should allow servers to be switched into sleep mode under light load, and should consolidate computation loads onto fewer active servers.
Classification of resource management techniques for MEC
Decision model 1: deterministic task model with binary offloading
Consider the aforementioned single-user MEC system, where the binary offloading decision is whether a particular task should be offloaded for edge execution or computed locally.
From the energy-consumption perspective
Research on this problem dates back to conventional cloud-computing systems, where the communication links were typically assumed to have a fixed rate $B$.
When offloading to the cloud improves latency
Offloading the computation to the cloud server improves the latency performance only when
$$\frac{w}{f_m}>\frac{d}{B}+\frac{w}{f_s} \tag{6}$$
where
- $w$: the amount of computation (in CPU cycles) for the task;
- $f_m$: the CPU speed of the mobile device;
- $f_s$: the CPU speed of the cloud server;
- $d$: the input data size.
Extension to offloading to an edge server
When offloading to the edge server saves energy
Offloading the task helps save mobile energy when
$$p_m\frac{w}{f_m}>p_t\frac{d}{B}+p_i\frac{w}{f_s} \tag{7}$$
where
- $p_m$: the CPU power consumption of the mobile device;
- $p_t$: the transmission power;
- $p_i$: the power consumed by the device while the task runs on the server.
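The latency condition (6) and the energy condition (7) can be checked directly once the task and system parameters are known; a minimal sketch (function names are mine, not from the survey):

```python
def offload_improves_latency(w, f_m, f_s, d, B):
    """Eq. (6): offloading beats local execution in latency iff
    w/f_m > d/B + w/f_s (local compute time vs. upload + remote time)."""
    return w / f_m > d / B + w / f_s

def offload_saves_energy(w, f_m, f_s, d, B, p_m, p_t, p_i):
    """Eq. (7): offloading saves mobile energy iff
    p_m*w/f_m > p_t*d/B + p_i*w/f_s (local CPU energy vs. transmit
    energy plus idle energy while the server computes)."""
    return p_m * w / f_m > p_t * d / B + p_i * w / f_s
```

Both conditions favor offloading for compute-heavy tasks with small inputs (large $w$, small $d$) and fast links; a task with a huge input on a slow link fails both tests.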
From the wireless-communication perspective
Energy consumption drives the offloading decision
The offloading decision is determined by whichever computation mode (offloading or local computing) incurs less energy consumption.
CPU-cycle frequency should adapt to the transmit power
The optimal CPU-cycle frequencies for local computing, and the time division for offloading, should be adaptive to the transmit power.
In wireless communications the data rate is time-varying: it changes with the time-varying channel gains and also depends on the transmission power. This calls for control policies for power adaptation and data scheduling that streamline the offloading process.
Shannon's formula
Shannon's formula reveals the relation between energy and rate: task offloading is desirable when the channel power gain exceeds a threshold and the server CPU is fast enough.
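To make the energy-rate relation concrete, the Shannon capacity gives the achievable rate as a function of transmit power and channel power gain; a minimal sketch (parameter names are illustrative, not from the survey):

```python
import math

def shannon_rate(bandwidth_hz, p_t, g, n0):
    """Shannon capacity R = B * log2(1 + p_t * g / N0): the achievable
    rate for transmit power p_t, channel power gain g, and noise power
    N0 over bandwidth B.  A larger g lets the same rate be reached with
    less power, which is why offloading pays off above a gain threshold."""
    return bandwidth_hz * math.log2(1 + p_t * g / n0)
```

For instance, over 1 MHz with an SNR of 3 (i.e., $p_t g / N_0 = 3$), the rate is $10^6 \log_2 4 = 2$ Mbit/s.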
Energy-consumption optimization over wireless links has been studied in greater depth, e.g.:
W. Zhang et al., “Energy-optimal mobile cloud computing under stochastic wireless channel,” IEEE Trans. Wireless Commun., vol. 12, no. 9, pp. 4569–4581, Sep. 2013.
Power optimization for local execution
Using the DVFS technique, the problem is formulated as a convex optimization problem. The optimal CPU-cycle frequencies over the computation duration were derived in closed form by solving the Karush-Kuhn-Tucker (KKT) conditions, suggesting that the processor should speed up as the number of completed CPU cycles increases.
Data scheduling
Under the Gilbert-Elliott channel model, the optimal data-transmission schedule was obtained via dynamic programming (DP), and the scaling law of the minimum expected energy consumption with respect to the execution deadline was also derived.
A relatively complex mobile application can be partitioned into a sequence of subtasks.
The following references study how partial offloading further improves MEC performance (from here through the last reference of this part):
Y. Wang, M. Sheng, X. Wang, L. Wang, and J. Li, “Mobile-edge computing: Partial computation offloading using dynamic voltage scaling,”IEEE Trans. Commun., vol. 64, no. 10, pp. 4268–4282, Oct. 2016.
The references below use task-call graphs to make the dependencies among subtasks explicit; code-partitioning strategies are then used to dynamically generate the optimal set of tasks to offload (from here through the last reference of this part):
M. Jia, J. Cao, and L. Yang, “Heuristic offloading of concurrent tasks for computation-intensive applications in mobile cloud computing,” in Proc. IEEE Int. Conf. Comput. Commun. (INFOCOM WKSHPS), Toronto, ON, Canada, Apr./May 2014, pp. 352–357.
Y.-H. Kao, B. Krishnamachari, M.-R. Ra, and F. Bai, “Hermes: Latency optimal task assignment for resource-constrained mobile computing,” in Proc. IEEE Int. Conf. Comput. Commun. (INFOCOM), Hong Kong, Apr./May 2015, pp. 1894–1902.
S. E. Mahmoodi, R. N. Uma, and K. P. Subbalakshmi, “Optimal joint scheduling and cloud offloading for mobile applications,” IEEE Trans. Cloud Comput., to be published.
W. Zhang, Y. Wen, and D. O. Wu, “Collaborative task execution in mobile cloud computing under a stochastic wireless channel,” IEEE Trans. Wireless Commun., vol. 14, no. 1, pp. 81–93, Jan. 2015.
S. Khalili and O. Simeone, “Inter-layer per-mobile optimization of cloud mobile computing: A message-passing approach,” Trans. Emerg. Telecommun. Technol., vol. 27, no. 6, pp. 814–827, Jun. 2016.
P. D. Lorenzo, S. Barbarossa, and S. Sardellitti, “Joint optimization of radio resources and code partitioning in mobile edge computing.”[Online]. Available: http://arxiv.org/abs/1307.3835v3
S. E. Mahmoodi, K. P. Subbalakshmi, and V. Sagar, “Cloud offloading for multi-radio enabled mobile devices,” in Proc. IEEE Int. Conf.Commun. (ICC), London, U.K., Jun. 2015, pp. 5473–5478.
Characteristics: task arrivals are stochastic, and tasks that have arrived but not yet been processed join a task buffer queue.
For such systems, long-term performance matters more than per-decision performance, and the temporal correlation introduced by optimizing system operation over time makes the design highly challenging.
D. Huang, P. Wang, and D. Niyato, “A dynamic offloading algorithm for mobile computing,” IEEE Trans. Wireless Commun., vol. 11, no. 6, pp. 1991–1995, Jun. 2012.
J. Liu, Y. Mao, J. Zhang, and K. B. Letaief, “Delay-optimal computation task scheduling for mobile-edge computing systems,” in Proc.IEEE Int. Symp. Inf. Theory (ISIT), Barcelona, Spain, Jul. 2016,pp. 1451–1455.
S. Chen, Y. Wang, and M. Pedram, "A semi-Markovian decision process based control method for offloading tasks from mobile devices to the cloud," in Proc. IEEE Glob. Commun. Conf. (GLOBECOM), Atlanta, GA, USA, Dec. 2013, pp. 2885–2890.
S.-T. Hong and H. Kim, "QoE-aware computation offloading scheduling to capture energy-latency tradeoff in mobile clouds," in Proc. IEEE Int. Conf. Sens. Commun. Netw. (SECON), London, U.K., Jun. 2016, pp. 1–9.
J. Kwak, Y. Kim, J. Lee, and S. Chong, "DREAM: Dynamic resource and task allocation for energy minimization in mobile cloud systems," IEEE J. Sel. Areas Commun., vol. 33, no. 12, pp. 2510–2523, Dec. 2015.
Z. Jiang and S. Mao, "Energy delay tradeoff in cloud offloading for multi-core mobile devices," IEEE Access, vol. 3, pp. 2306–2316, 2015.
Comparison of the research focuses of papers on single-user MEC systems:
**Binary offloading**
**Partial offloading**
**Stochastic task models**
Multiple mobile devices share the same MEC server.
The following works study centralized and decentralized resource allocation for different MEC systems:
C. You, K. Huang, H. Chae, and B.-H. Kim, "Energy-efficient resource allocation for mobile-edge computation offloading," IEEE Trans. Wireless Commun., vol. 16, no. 3, pp. 1397–1411, Mar. 2017.
Joint allocation of computation and communication resources in multiuser mobile cloud computing
Joint optimization of radio resources and code partitioning in mobile edge computing
Latency optimization for resource allocation in mobile-edge computation offloading
Joint offloading decision and resource allocation for multi-user multi-task mobile cloud
Joint offloading and resource allocation for computation and communication in mobile cloud with computing access point
Power-delay tradeoff in multi-user mobile-edge computing systems
Joint energy minimization and resource allocation in C-RAN with mobile cloud
Game theory and decomposition techniques drive research on distributed resource allocation.
Computation tasks are assumed either to be executed locally or to be fully offloaded, over single or multiple interference channels.
Under fixed mobile transmit power, minimizing the total energy consumption and offloading latency is formulated as an integer program, which is NP-hard (NP-hard: a problem to which every NP problem can be reduced in polynomial time).
Game-theoretic techniques were applied to develop a distributed algorithm that attains a Nash equilibrium. (A Nash equilibrium, also called a non-cooperative game equilibrium, is a strategy profile in which no player can benefit by unilaterally changing its strategy, given the strategies of the others.)
Furthermore, for each user, offloading is beneficial only when the received interference is below a threshold.
Multi-user mobile cloud offloading game with computing access point
Game-theoretic analysis of computation offloading for cloudlet-based mobile cloud computing
Efficient multi-user computation offloading for mobile-edge cloud computing
System models these works are based on:
Multiuser joint task offloading and resource optimization in proximate clouds
Energy-efficient dynamic offloading and resource scheduling in mobile cloud computing
Joint optimization of radio and computational resources for multicell mobile-edge computing
Research premises
Synchronized users and flexible local-edge parallel computing.
However, subsequent studies of practical MEC systems relax these requirements.
First, asynchronous task arrivals cause queuing delay, since the server buffers and computes tasks sequentially.
Joint scheduling of communication and computation resources in multiuser wireless application offloading
Joint subcarrier and CPU time allocation for mobile edge computing
Second, even for synchronously arriving tasks, latency requirements can differ widely across users running different types of applications (e.g., delay-sensitive versus delay-tolerant ones); the server scheduler must therefore assign different priorities based on latency requirements.
Multi-user computation partitioning for latency sensitive mobile cloud applications
Energy-efficient dynamic offloading and resource scheduling in mobile cloud computing
Two advantages make this a promising technique:
Exploring device-to-device communication for mobile cloud computing
Device-to-device-based heterogeneous radio access network architecture for mobile cloud computing
Energy efficient cooperative computing in mobile wireless sensor networks
Energy-traffic trade-off cooperative offloading for mobile cloud computing
Joint computation and communication cooperation for mobile edge computing
Exploiting non-causal CPU-state information for energy-efficient mobile cooperative computing
Computation peer offloading for energy-constrained mobile edge computing in small-cell networks
Heterogeneous MEC systems consist of a central cloud plus multiple edge servers. The cooperative interaction between central and edge clouds at different tiers raises many research challenges; server selection, cooperation, and computation migration have recently attracted considerable attention.
The key design issue is the offloading destination, e.g., an edge server or the central cloud.
A cooperative scheduling scheme of local cloud and Internet cloud for delay-aware mobile cloud computing
A game theoretic resource allocation for overall energy minimization in mobile cloud computing system
Offloading in mobile edge computing: Task allocation and computational frequency scaling
Resource sharing through server cooperation not only improves resource utilization and operator revenue, but also improves the user experience.
A framework for cooperative resource management in mobile cloud computing
Decentralized and optimal resource cooperation in geo-distributed mobile cloud computing
Proactive edge computing in latency-constrained fog networks
Interestingly, the deferred-acceptance algorithm used here originates from the stable-matching (dating/marriage) problem.
Computation migration arises mainly from the mobility of offloading users. When a user moves close to a new MEC server, the network controller decides whether to migrate the computation to the new server, or to keep computing at the original server and deliver the result to the user via the new one.
Mobility-induced service migration in mobile microclouds
Dynamic service migration and workload scheduling in edge-clouds
Joint offloading decision and resource allocation for mobile cloud with computing access point
1. To reduce total computation latency, delay-tolerant and computation-intensive tasks should be offloaded to the remote central cloud, while delay-sensitive tasks are computed at edge servers.
2. Server cooperation improves the computation efficiency and resource utilization of MEC servers; more importantly, it balances the distribution of offloaded computation across the network, reducing total computation latency while using resources more effectively. Cooperative designs must further account for the temporal and spatial patterns of task arrivals, server computation capacities, time-varying channels, and each server's individual payoff.
3. Computation migration is an effective tool for mobility management in MEC. Whether to migrate depends on the migration overhead, the distance between user and server, channel conditions, and server computation capacity. Specifically, when a user moves far away from its original server, it is usually better to migrate the computation to a server near the user.
Three challenges in MEC resource management remain open:
Two-timescale resource management: for simplicity, the wireless channel is assumed static over the entire task execution. This assumption breaks down when the channel coherence time is much shorter than the latency requirement.
Online task partitioning: for tractability, the current literature ignores wireless-channel fluctuations when solving the task-partitioning problem and fixes the partitioning decision before execution begins. Under such offline decisions, changing channel conditions can render the offloading inefficient or even infeasible, severely degrading computation performance. Online task-partitioning policies should incorporate channel statistics into the partitioning problem, which is NP-hard even for static channels.
Collaborative task execution in mobile cloud computing under a stochastic wireless channel
Online placement of multi-component applications in edge computing environments
Large-scale convex optimization for dense wireless cooperative networks
Unlike the base-station placement problem, the optimal placement of edge servers is coupled with computational resource provisioning.
Two aspects to consider: the system planners and administrators should account for two important factors:
site rental cost and computation demand.
Problems and possible solutions
Regions with high computation demand should host more MEC servers, but rents there are high; co-locating MEC servers with existing infrastructure (e.g., small-cell base stations) is therefore a promising approach.
However, this leaves one problem unsolved: in macro cells, poor signal quality makes the user experience hard to guarantee.
Femtocells: Past, present, and future
Modeling and analysis of K-tier downlink heterogeneous cellular networks
Moreover, not every computation hotspot is co-located with communication equipment, so we need to deploy edge servers with wireless transceivers by properly choosing new locations.
Furthermore, at high-rent sites it is preferable to provision larger computation resources, so as to serve more users and earn higher revenue.
MEC system efficiency depends heavily on the architecture; aspects to consider include workload intensity and communication-rate statistics.
The future mobile computing network is envisioned as three tiers: the cloud, the edge (a.k.a. fog) layer, and the service-subscriber layer, as shown in the figure.
By analogy with hierarchical heterogeneous cellular networks, a heterogeneous MEC system also comprises multiple tiers. Such a hierarchical structure not only retains the efficient transmission of heterogeneous networks, but can also absorb peak computation loads by distributing the workload across tiers. However, because of additional factors (workload intensity, inter-tier communication cost, workload-distribution strategies, etc.), the computation-capacity provisioning problem remains open.
Non-dedicated computation resources (e.g., laptops and smartphones) can be harnessed for dedicated computing, improving resource utilization and reducing deployment cost. However, their voluntary and ad-hoc nature raises resource-management and security issues.
An approach to ad hoc cloud computing
A stochastic workload distribution approach for an ad hoc mobile cloud
AMCloud: Toward a secure autonomic mobile ad hoc cloud computing system
Vehicular fog computing: A viewpoint of vehicles as the infrastructures
The server density should match user demand (which interacts with infrastructure deployment costs and marketing strategies).
Stochastic-geometry-based performance analysis of MEC systems is promising, but the following issues must be addressed:
Energy-optimal mobile cloud computing under stochastic wireless channel
Delay-optimal computation task scheduling for mobile-edge computing systems
An enhanced community-based mobility model for distributed mobile social networks
Caching brings content close to end users, while MEC deploys edge servers to handle computation-intensive tasks at the edge and enhance the user experience; cache-enabled MEC merges the two.
Two feasible approaches:
Many applications today involve intensive computation based on data analytics. Smart data caching, i.e., storing frequently used databases at the edge, addresses this. Going further, caching computation results that may be reused by other users improves the computation performance of the whole MEC system even more.
Modeling and characterizing user experience in a cloud server based mobile gaming approach
Collaborative multi-bitrate video caching and processing in mobile-edge computing networks
For data caching at a single MEC edge server, a key issue is the trade-off between the large volume of candidate databases and the limited storage resources. Data caching in MEC systems affects computation accuracy, latency, and edge-server energy consumption in several ways that have not yet been characterized in the literature.
Further, a database popularity distribution model would statistically describe the database configurations of different MEC applications.
On the modeling and analysis of heterogeneous radio access networks using a Poisson cluster process
In some applications, user information (location and personal preferences) improves the efficiency with which edge servers handle computation requests. User mobility, however, makes anytime, reliable computing difficult, for the following reasons:
1. Heterogeneity: frequent handovers across diverse edge servers (with different system configurations and user-server association policies) are complex.
2. Interference: users moving between cells cause interference, degrading transmission performance.
3. Latency: frequent handovers increase latency and degrade the user experience.
The following works aim to design mobility-aware MEC systems:
Mobility-assisted opportunistic computation offloading
Offloading in mobile cloudlet systems with intermittent connectivity
User mobility model based computation offloading decision for mobile cloud
MuSIC: Mobility-aware optimal service allocation in mobile cloud computing
Efficient mobility and traffic management for delay tolerant cloud data in 5G networks
Edge caching with mobility prediction in virtualized LTE mobile networks
Mobility-aware caching for content-centric wireless networks: Modeling and methodology
Next, a series of interesting research directions:
Drawbacks of conventional mobile computation-offloading designs:
Solution:
Live prefetching for mobile computation offloading
Although this reduces handover latency and makes offloading more efficient, it raises new issues:
1. trajectory-prediction models must trade off complexity against accuracy;
2. the selection of which data to prefetch: with adaptive transmit-power control, the computation-intensive parts can be fetched in advance.
Advantages of D2D communication: increased network capacity, reduced data-traffic burden on the cellular system, and reduced transmission power (thanks to short-range links).
Issues for D2D under mobility:
1. How to combine the advantages of D2D communication and cellular systems.
One possible approach: offload computation-intensive tasks to edge servers at the base-station side (with large computation capacity) to shorten server-computation time, while requests involving large data volumes and exact computation are retrieved from nearby users via D2D communication, yielding higher energy efficiency.
2. Given users' mobility information, dynamic channels, and heterogeneous user computation capabilities, the selection of nearby users for offloading should be optimized.
3. Large-scale D2D links introduce interference, so advanced interference cancellation and cognitive radio techniques can be applied.
Scenario: for delay-sensitive and resource-demanding applications, any computation error can have severe consequences.
Three main components: fault prevention, fault detection, and fault recovery.
Fault prevention:
Avoid and prevent MEC faults by using additional reliable offloading links. Macro base stations and the central cloud can both serve as protection clouds, since their wide network coverage sustains continuous MEC service. The key design challenges are the trade-off between QoS (e.g., transmission-error probability) and the energy consumed by a single user's extra offloading link, and how to allocate protection clouds among multi-user MEC applications.
Fault detection:
Collect fault information, e.g., by deploying smart agents for timely detection and by receiving feedback on MEC services. Channel and mobility estimation techniques can be used to anticipate faults and shorten detection time.
Fault recovery:
Upon detecting a fault, recovery techniques keep the MEC service running and speed up its restoration. A service interrupted by a fault can be switched to a more reliable backup wireless link with adaptive power control for high-speed offloading. Other approaches: migrating the workloads to neighboring MEC systems directly or through ad-hoc relay nodes, as in
Recovery for overloaded mobile edge computing
Multiuser MEC systems with mobility in dynamic environments require adaptive service scheduling, which incorporates real-time user information and regenerates the scheduling order from time to time.
Solution:
Under a dynamic scheduling scheme, users in poor conditions are assigned a higher offloading priority so that deadlines can still be met.
Designing mobility-aware offloading priority functions involves two steps. The first is to accurately predict each user's mobility profile and channel conditions; the main challenge there is the mapping from mobility effects to the offloading-priority function.
Optimization of resource provisioning cost in cloud computing
Reservation-based resource scheduling and code partition in mobile cloud computing
The main energy-aware design approaches are: dynamic right-sizing for energy-proportional MEC, geographical load balancing (GLB) for MEC, and the use of renewable energy in MEC systems.
Energy-proportional (or power-proportional) servers:
A server's energy consumption should be proportional to its computation load.
Caveat: one way to approach energy proportionality is to shut down or slow down lightly loaded MEC servers, but alongside the energy savings, toggling servers between on and off states brings problems: extra overhead and latency, degraded user experience, and increased wear-and-tear risk from switching. In short, naive switching alone is not a good approach.
Solution: effective dynamic right-sizing requires accurately predicting the computation-workload profile of each edge server. MEC workload patterns are highly variable for many reasons, demanding more accurate prediction techniques; moreover, online dynamic right-sizing algorithms that require little prediction information need to be developed.
Online algorithms for geographical load balancing
Temperature aware workload management in geo-distributed data centers
Workload-routing decisions among different data centers exploit spatial diversity in workload patterns, temperature, and electricity prices.
Example: a cluster of MEC servers serving a mobile user can both raise the energy efficiency of lightly loaded servers (improving user experience) and prolong the mobile device's battery life.
In addition, realizing GLB requires effective resource-management techniques.
Factors to consider when applying GLB:
Feasibility of energy harvesting (EH) for MEC: 1. MEC servers are expected to be densely deployed and to have low power consumption; 2. EH can prolong their battery lives; 3. EH eliminates the need for human intervention.
The main issues for renewables-powered MEC systems are green-aware resource allocation and computation offloading.
Rather than minimizing energy consumption while satisfying the user experience, the design principle for renewables-powered MEC systems becomes optimizing system performance subject to the available harvested energy.
energy side information (ESI)
Online learning for offloading and autoscaling in renewable-powered mobile edge computing
Dynamic computation offloading for mobile-edge computing with energy harvesting devices
The randomness of renewable energy makes the system less stable; some remedies are explored in:
Transmit power minimization for wireless networks with energy harvesting relays
Online algorithms for geographical load balancing
Optimal power allocation for energy harvesting and power grid coexisting wireless communication systems
On optimizing green energy utilization for cellular networks with hybrid energy supplies
Grid energy consumption and QoS tradeoff in hybrid energy supply wireless networks
Enabling wireless power transfer in cellular networks: Architecture, modeling and deployment
Energy efficient mobile cloud computing powered by wireless energy transfer
应用: the computation offloading for mobile devices in MEC systems
Energy efficient resource allocation for wireless power transfer enabled collaborative mobile clouds
应用:data offloading for collaborative mobile clouds
Security and privacy issues raised by MEC's characteristics:
Problem: edge servers of different types come from different vendors, making conventional trust and authentication mechanisms inapplicable; and because each edge server serves many mobile devices, authentication becomes far more complex than in conventional cloud-computing systems. Minimizing the overhead of the authentication mechanism and of the associated distribution strategy is difficult.
Problem: in MEC systems, different networks (e.g., WiFi, LTE, and 5G) form different trust domains. In existing solutions a certificate authority can only issue credentials to elements within its own trust domain, making it hard to guarantee privacy and data integrity for communication across trust domains.
Solution: use cryptographic attributes as trust credentials to exchange session keys. The concept of federated content networks, which negotiate between multiple trust domains and maintain inter-domain trust credentials, can also be applied.
Providing security in NFV: Challenges and opportunities
Non-interactive verifiable computing: Outsourcing computation to untrusted workers
For secure and private computation, the edge platform should be able to execute computation tasks without seeing the raw user data, and the computation results should be verifiable; this can be achieved with encryption algorithms and verifiable computing techniques.
Example:
Secure optimization computation outsourcing in cloud computing: A case study of linear programming
Beyond the above: user mobility, application and traffic migration, and connectivity and storage requirements.
Video stream analysis service:
Widespread applications include vehicle identification, face recognition, and home security monitoring.
MEC for video stream analysis
The edge server should be able to conduct video management and analysis, with only the valuable video clips (screenshots) backed up to the cloud data centers.
Augmented reality (AR) service:
Applications: museum video guides, online games.
MEC for AR services
Examples:
Intel mobile edge computing technology improves the augmented reality experience
UTM infrastructure and connected society
Others: active device tracking, RAN-aware content optimization, distributed content and Domain Name System (DNS) caching, enterprise networks, as well as safe-and-smart cities.
To integrate MEC in 5G systems, the recent 5G technical specifications have explicitly pointed out necessary functionality supports that should be offered by 5G networks for edge computing, as listed below:
Support of service requirements: QoS characteristics (in terms of resource type, priority level, packet delay budget, and packet error rate) describe the packet-forwarding treatment that a QoS flow receives edge-to-edge between the UE and the UPF, in combination with the 5G QoS Indicator (5QI).
Technical specification group services and system aspects; System architecture for the 5G systems; Stage 2 (Release 15)
Advanced mobility management strategy: mobility patterns are introduced to design the mobility-management strategy of 5G systems; they play an important role in designing advanced transmission schemes in wireless systems and in MEC applications.
Capability of network slicing: network slicing is an agile, virtualized network architecture that allows multiple network instances to be created on top of a common shared physical infrastructure. With network slicing in 5G systems, MEC applications can obtain optimized, dedicated network resources, which helps greatly reduce access-network latency and supports dense access by MEC service users.