Abstract
To remain competitive in manufacturing today, companies must make their industrial robots smarter and allow them to collaborate with one another more effectively. This is typically done by adding some form of learning, or artificial intelligence (AI), to the robots. These learning algorithms are often packaged as cloud functions in a remote computational center, since the amount of computational power they require is infeasible to provide at the same physical location as the robots.
Augmenting the robots with such cloud functions has usually not been possible, since the robots require a very low and predictable end-to-end latency, something which is difficult to achieve when cloud functions are involved. Moreover, different sets of robots will have different end-to-end latency requirements, despite using the same network of cloud functions. With the introduction of 5G and network function virtualization (NFV), however, it does become possible: this technology makes it possible to control the amount of resources allocated to the different cloud functions, and thereby gives us control over the end-to-end latency. By controlling this in a smart way, a very low and predictable end-to-end latency can be achieved.
In this work we address this challenge by deriving a rigorous mathematical framework that models a general network of cloud functions, on top of which several applications are hosted. Using this framework we propose a generalized AutoSAC (automatic service and admission controller) that builds on previous work by the authors. In the previous work the system was only capable of handling a single set of cloud functions with a single application hosted on top of it. With the contributions of this paper it becomes possible to host multiple applications on top of a larger, general network of cloud functions, and to let each application have its own end-to-end deadline requirement.
The contributions of this paper can be summarized in the following four parts:
a) Input prediction: To achieve a good prediction of the incoming traffic, we propose a communication scheme between the cloud functions. This allows for a quicker reaction to changes in the traffic rates and, in the end, a better utilization of the resources allocated to the cloud functions (a sketch of such a scheme is given after this list).
b) Service control: Via a small theorem we show a simplification of the control law derived in the previous work. This can be especially useful when controlling cloud functions that make use of a large number of virtual machines or containers.
c) Admission control: To ensure that the end-to-end latency is low and predictable, we equip every cloud function with an intermediary node deadline. To enforce the node deadlines we propose a novel admission controller capable of achieving the highest possible throughput while still guaranteeing that every admitted packet meets its node deadline. Furthermore, we show that the necessary computation can be done in constant time, which implies that it is possible to enforce a time-varying node deadline (a sketch of such a check follows the list).
d) Selection of node deadlines: The problem of assigning intermediary node deadlines in a way that enforces the global end-to-end deadlines is addressed by investigating how different node deadlines affect the performance of the network. The insights from this are then used to set up a convex optimization problem for the assignment (also sketched below).
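As an illustration of contribution a), the following is a minimal sketch of a prediction-sharing scheme in which each cloud function forwards its predicted output rate to its downstream neighbours, so that they can react to rate changes before the traffic reaches them. The class, the exponential-smoothing predictor, and the push-based messaging are assumptions made for illustration and are not taken from the report.

```python
# Minimal sketch of a prediction-sharing scheme between cloud functions.
# Assumption (not from the report): each function predicts its own output
# rate with exponential smoothing and pushes the prediction to its
# downstream neighbours, which use it as their predicted input rate.

class CloudFunction:
    def __init__(self, name, smoothing=0.5):
        self.name = name
        self.smoothing = smoothing          # weight of the newest measurement
        self.predicted_input_rate = 0.0     # packets/s announced by upstream
        self.predicted_output_rate = 0.0    # packets/s we announce downstream
        self.downstream = []                # CloudFunction instances after us

    def receive_prediction(self, rate):
        """Called by an upstream function when its prediction changes."""
        self.predicted_input_rate = rate

    def update(self, measured_output_rate):
        """Blend the newest measurement into the prediction and push it on."""
        a = self.smoothing
        self.predicted_output_rate = (
            a * measured_output_rate + (1 - a) * self.predicted_output_rate
        )
        for nxt in self.downstream:
            nxt.receive_prediction(self.predicted_output_rate)


# Tiny chain f1 -> f2: when f1 sees a rate change, f2 learns about it
# immediately instead of waiting for the traffic to show up in its own queue.
f1, f2 = CloudFunction("f1"), CloudFunction("f2")
f1.downstream.append(f2)
f1.update(measured_output_rate=120.0)
print(f2.predicted_input_rate)   # 60.0 with smoothing=0.5
```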
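For contribution c), the sketch below shows a constant-time node-level admission check under the assumption of FIFO service at a known aggregate service rate. The admit-if-the-backlog-can-be-cleared-in-time rule is an illustrative stand-in for the controller in the report, and the variable names are hypothetical.

```python
# Hedged sketch of a constant-time admission check for one cloud function.
# Assumption: packets are served FIFO at an aggregate rate `service_rate`
# (packets/s), so a newly admitted packet leaves after roughly
# (backlog + 1) / service_rate seconds. Admit only if that is within the
# node deadline. The check is O(1), so the deadline may vary over time.

def admit(backlog: int, service_rate: float, node_deadline: float) -> bool:
    """Return True if an arriving packet can still meet the node deadline."""
    if service_rate <= 0.0:
        return False
    predicted_sojourn = (backlog + 1) / service_rate
    return predicted_sojourn <= node_deadline


# Example: 40 queued packets, 1000 packets/s of capacity, 50 ms node deadline.
print(admit(backlog=40, service_rate=1000.0, node_deadline=0.050))  # True
print(admit(backlog=80, service_rate=1000.0, node_deadline=0.050))  # False
```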
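For contribution d), one plausible convex formulation of the node-deadline assignment is sketched below using CVXPY: the node deadlines on each application's path must sum to at most that application's end-to-end deadline, and a concave (here logarithmic) utility of the node deadlines stands in for the performance model derived in the report. The paths, deadlines, and utility are all illustrative assumptions.

```python
# Hedged sketch: assign per-node deadlines subject to end-to-end deadlines.
# Assumptions (not from the report): two applications share node n2, the
# application paths are fixed, and node "performance" is modelled by
# log(d_n), i.e. looser node deadlines help, with diminishing returns.
import cvxpy as cp

nodes = ["n1", "n2", "n3"]
d = {n: cp.Variable(pos=True) for n in nodes}    # node deadlines (seconds)

paths = {                                        # application -> nodes on path
    "app_A": ["n1", "n2"],
    "app_B": ["n2", "n3"],
}
e2e_deadline = {"app_A": 0.050, "app_B": 0.080}  # end-to-end deadlines (s)

# Each application's node deadlines must fit within its end-to-end deadline.
constraints = [
    sum(d[n] for n in path) <= e2e_deadline[app]
    for app, path in paths.items()
]

# Maximize a concave utility of the node deadlines (illustrative choice).
objective = cp.Maximize(sum(cp.log(d[n]) for n in nodes))

problem = cp.Problem(objective, constraints)
problem.solve()
for n in nodes:
    print(n, round(float(d[n].value), 4))
```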
Original language | English
---|---
Publisher | Department of Automatic Control, Lund Institute of Technology, Lund University
Number of pages | 18
Status | Published - 2018 Apr 4
Publication series
Name | Technical Reports TFRT-7655
---|---
Subject classification (UKÄ)
- Control Engineering
Projects
- 2 Completed
- WASP: Autonomous Cloud
Årzén, K.-E. (PI), Maggio, M. (Researcher), Eker, J. (Researcher), Berner, T. (Researcher), Skarin, P. (Researcher), Martins, A. (Researcher) & Millnert, V. (Researcher)
2016/01/01 → 2019/12/31
Project: Research
- Feedback Computing in Cyber-Physical Systems
Årzén, K.-E. (Researcher), Eker, J. (Researcher), Maggio, M. (Researcher), Millnert, V. (Researcher), Nayak Seetanadi, G. (Researcher) & Janneck, J. (Researcher)
2015/01/01 → 2018/12/31
Project: Research