Table 1 summarizes the achieved results in relation to the main technical challenges and the relevant operational objectives addressed by the project, as reported in Annex 1 – “Description of Action” (DoA). The analysis of how the INPUT project met the key performance indicators targeted in the project proposal objectives follows Table 1.


Technical Challenge #1: Ground-breaking Personal Cloud Services

Obj. ID

INPUT Objectives


To enable next-generation Personal Cloud Services to entirely or partially replace users’ physical appliances, or to add (potentially infinite) smartness and capabilities beyond their physical hardware capacity as a service (SDaaS), through the virtual device image approach (i.e., the “Things virtualization and management as a service” paradigm). The virtual image will allow reducing the carbon footprint of completely and partially virtualized appliances to 50% and 75%, respectively.

Project Achievements:

As described in more detail in the WP2 and WP4 deliverable reports (D2.1, D2.2, D4.2, D4.3, and D4.5), the INPUT platform has been specifically designed to make the management of Personal Cloud Services extremely flexible, automated, and scalable.


Regarding flexibility, the INPUT platform allows Service Providers to create their Personal Cloud Service templates as graphs of interconnected IaaS/PaaS components to be deployed into the Telecom Operator edge infrastructure. Some of these components are envisaged to attach directly to the End-User’s personal network, acting as the Virtual Image of the service (or of the virtualized smart device), or to connect at layer 2 with the objects in the user’s home (e.g., sensors, actuators, TVs).


Turning to automation, each component of a Personal Cloud Service (i.e., a Service_App) is associated with a set of metadata defining the automated operations to be performed by the INPUT platform. Among these metadata, the most relevant is the “proximity class”: a parameter defining the maximum tolerable distance of the component from the end-user (or from other service components). If this requirement is not satisfied, the INPUT platform automatically migrates the component to a closer computing facility meeting the proximity constraint.
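For illustration only, the following minimal Python sketch shows the kind of metadata and check involved; all names (the descriptor fields, the class thresholds, the facilities) are hypothetical and do not reflect the actual NS-OS implementation.

    # Illustrative sketch (hypothetical names): a Service_App descriptor carrying a
    # "proximity class" and a check that triggers migration when the measured distance
    # from the end-user exceeds the class threshold.
    PROXIMITY_CLASS_LIMITS_MS = {"strict": 5.0, "edge": 20.0, "relaxed": 100.0}  # invented values

    service_app = {
        "name": "personal-transcoder",   # hypothetical component of a Personal Cloud Service
        "proximity_class": "strict",     # metadata driving automated placement
        "current_facility": "pop-3",
    }

    def enforce_proximity(app, measured_latency_ms, candidate_facilities):
        """Migrate the component to the closest facility satisfying its proximity class."""
        limit = PROXIMITY_CLASS_LIMITS_MS[app["proximity_class"]]
        if measured_latency_ms <= limit:
            return app["current_facility"]       # requirement met, nothing to do
        feasible = [(lat, pop) for pop, lat in candidate_facilities.items() if lat <= limit]
        if not feasible:
            raise RuntimeError("no facility satisfies the proximity class")
        best_latency, best_pop = min(feasible)
        app["current_facility"] = best_pop       # stands in for a live migration
        return best_pop

    # Example: the user moved, latency to pop-3 grew to 12 ms; pop-1 is now 3 ms away.
    print(enforce_proximity(service_app, 12.0, {"pop-1": 3.0, "pop-2": 8.0}))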


Finally, as far as scalability is concerned, Personal Cloud Service templates are defined once and deployed as multiple per-user service instances, whose lifecycles can be individually managed by a third-party application orchestrator through the OpenStack-like interfaces at the north-bound of the INPUT platform. Scalability is further reinforced by the SDN multi-center overlays and hardware offloading mechanisms introduced in the remainder of this section.

All these capabilities, integrated into the INPUT platform, enable wide degrees of freedom in the design and management of personal cloud services, supporting the total or partial virtualization of appliances and innovative features beyond those of the two demonstrated use-case applications, namely “Virtualization of existing End-User Electronic Devices” and “Virtualization of IoT Services in a Home Management System: Virtual Collector Device.”

The above use-case applications have been specifically selected for the INPUT platform demonstration since they are highly representative of services with different natures, performance requirements, and mobility and scalability needs.

The prototypes of these use-case applications, developed almost completely from scratch during the project lifetime, have been run on the INPUT platform to evaluate and validate the performance of the INPUT core components, as well as to show the advantages and benefits that the platform provides to the applications.


To provide users with access to their virtual and physical devices (nominally deployed in their homes) at any time and in any location through virtual cloud-powered Personal Networks (PNaaS) that will enable trusted and secure incorporation of resources and services independently of their location across distributed computing and storage infrastructures.

Project Achievements:

Personal Networks have been defined since the D2.1 report in a fashion fully compliant with the declared objectives, and in particular as:

a secure and trusted virtual overlay network that is able to interconnect the smart devices of a user with standard L2 protocols and operations equivalent to the ones today available in the user’s home network, independently of their location (inside/outside the user’s home) or their nature (physical/virtual).


These virtual overlay networks have been powered through a novel SDN mechanism, named Multi-Center Overlay, specifically conceived to meet Personal Network connectivity requirements according to the INPUT platform operations and service-level needs.

The Multi-Center Overlay mechanism was first introduced in Annex C of the D2.1 report and fully specified in Annex A of the D2.3 report. It supports not only the interconnection between the physical home network and the mobile terminal of the end-user, but also the interconnection towards the Virtual Images of the activated Personal Cloud Services. Multi-Center Overlay networks have been specifically designed to boost the scalability and efficiency of the INPUT platform operations, since they allow clustering Service_Apps with similar proximity requirements in the same overlay center and enable bulk seamless live migrations of the Service_Apps bound to the same overlay center among in-network computing facilities.
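A minimal Python sketch of this clustering idea follows; the names and structures are invented for illustration and are not the overlay implementation specified in D2.3.

    # Illustrative sketch (hypothetical names): Service_Apps sharing a proximity
    # requirement are clustered in the same overlay "center", so that a single
    # re-homing decision moves all of them between in-network computing facilities.
    from collections import defaultdict

    def build_centers(service_apps):
        # Cluster Service_Apps by proximity class: one overlay center per class.
        centers = defaultdict(list)
        for app in service_apps:
            centers[app["proximity_class"]].append(app["name"])
        return dict(centers)

    def bulk_migrate(centers, proximity_class, target_facility, placement):
        # Re-home every Service_App bound to the given center in one bulk operation
        # (standing in for the seamless live migrations performed by the real platform).
        for app_name in centers.get(proximity_class, []):
            placement[app_name] = target_facility
        return placement

    apps = [
        {"name": "vSTB-ui", "proximity_class": "strict"},
        {"name": "vSTB-transcoder", "proximity_class": "strict"},
        {"name": "iot-collector", "proximity_class": "relaxed"},
    ]
    centers = build_centers(apps)                   # {"strict": [...], "relaxed": [...]}
    placement = {a["name"]: "pop-3" for a in apps}  # all apps initially on the same PoP
    print(bulk_migrate(centers, "strict", "pop-1", placement))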

As described in the D2.3 report, the mapping of overlay centers onto computing facilities is realized by means of a fast on-line heuristic algorithm running in the INPUT NS-OS, as part of a more complex policy-driven strategy, based on parametric optimization concepts, running on the INPUT NS-MAN.
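As a purely illustrative formulation (not the actual heuristic or the NS-MAN optimization model), the mapping can be thought of as an assignment problem of the following kind, where d_{c,f} is the distance/latency of overlay center c from facility f, r_c its resource demand, R_f the facility capacity, and δ_c the proximity-class bound:

    % Illustrative assignment formulation, not the actual INPUT algorithm.
    \begin{aligned}
    \min_{x \in \{0,1\}^{|C|\times|F|}} \quad & \sum_{c \in C}\sum_{f \in F} d_{c,f}\, x_{c,f} \\
    \text{s.t.} \quad & \textstyle\sum_{f} x_{c,f} = 1 \quad \forall c \quad \text{(one facility per center)} \\
    & \textstyle\sum_{c} r_{c}\, x_{c,f} \le R_{f} \quad \forall f \quad \text{(facility capacity)} \\
    & d_{c,f}\, x_{c,f} \le \delta_{c} \quad \forall c, f \quad \text{(proximity-class bound)}
    \end{aligned}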


In addition to the previous project outcomes, WP3 activities contributed by designing a highly scalable virtual home gateway, novel firmware for FPGA boards providing advanced hardware offloading capabilities (e.g., per-flow encryption/decryption, tunnelling, etc.) through extended OpenFlow interfaces, and a couple of Virtual Network Functions (VNFs) to attach the INPUT platform between the 4G access and core networks (see the D3.2 report).

In detail, the virtual home gateway has been realized through lightweight and highly scalable multi-context VNFs capable of migrating among network Points of Presence (PoPs) in a seamless fashion. The virtual home gateway has been equipped with classical security functions like firewalling and Network Address Translation (NAT).

Regarding the VNFs attaching the INPUT platform to the 4G network, they have been designed to intercept LTE S1-AP signalling, in order to identify the mobile terminals and their handovers among cells, as well as to redirect data-plane traffic (carried in LTE S1 GTP tunnels) between the personal network and the mobile terminal.
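The sketch below illustrates, with invented names and a deliberately simplified rule format, the kind of classification such a VNF performs (S1-AP signalling runs over SCTP destination port 36412, while GTP-U user-plane traffic uses UDP port 2152); it is not the actual VNF logic described in D3.3.

    # Illustrative sketch (invented rule format): classify S1 traffic so that signalling
    # can be inspected to track terminals/handovers, while GTP-U user-plane packets are
    # steered towards the user's Personal Network.
    S1AP_SCTP_PORT = 36412   # S1-AP signalling (control plane)
    GTPU_UDP_PORT = 2152     # GTP-U encapsulated user-plane traffic

    def classify_s1_packet(ip_proto, dst_port):
        """Return the action an interception VNF might apply to an S1 packet."""
        if ip_proto == "sctp" and dst_port == S1AP_SCTP_PORT:
            return "inspect-s1ap"                    # learn terminal identities and handovers
        if ip_proto == "udp" and dst_port == GTPU_UDP_PORT:
            return "redirect-to-personal-network"    # steer the tunnelled data plane
        return "forward-unchanged"

    print(classify_s1_packet("udp", 2152))   # -> redirect-to-personal-network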


All the outcomes previously listed have been integrated into the software prototypes delivered by WP2 and WP3 (see the D2.4 and the D3.5 reports), and they have been applied to the final INPUT demonstrator as core elements of the platform (see the D4.4 and D4.5 reports).


The positioning of the INPUT platform against the Network Functions Virtualisation environment and standard reference deployment scenarios, reported in the D5.6 report, has been a further key project contribution. This contribution has been comprehensively finalized and validated in the final demonstration prototype, where the INPUT platform has been integrated within an NFV-compliant deployment of a 4G mobile network (see again the D4.4 and D4.5 reports).


To support cloud service federation to build virtual images of devices.

Project Achievements:

As defined in the D2.1 and D4.1 reports, and fully specified in the D2.2 report, services are activated through a web/REST interface provided by the virtual home gateway. Upon activation, or even during normal operation, services can be enabled to request other services, even of a different nature (IaaS/PaaS), or to federate with resources in public datacenters.
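As a hedged illustration of this activation pattern (the endpoint path, host name, payload fields, and token handling below are invented and are not the actual interface specified in D2.2):

    # Illustrative only: endpoint, fields and token are hypothetical placeholders.
    import requests

    VHG_API = "https://vhg.example.net/api/v1"   # virtual home gateway (invented address)

    def activate_service(template_id, user_token):
        """Ask the virtual home gateway to instantiate a Personal Cloud Service template."""
        resp = requests.post(
            f"{VHG_API}/services",
            json={"template": template_id, "attach_to": "personal-network"},
            headers={"Authorization": f"Bearer {user_token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()   # e.g., identifier of the per-user service instance

    # activate_service("virtual-collector-device", user_token="...")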

In this respect, the INPUT Consortium included such technical solutions in the final demonstrator within the use-case “Virtualization of IoT Services in a Home Management System: Virtual Collector Device.” In this case, the main home management application (defined as an IaaS service) requests “Virtual Objects” (PaaS instances) as new sensors are enabled.


To design and develop cloud services able to reduce the carbon footprint and increase the sustainability of ICT technologies at both the Operator and user sides.

Project Achievements:

The main outcomes achieved by the INPUT project regarding sustainability aspects can be summarized in i) the design, development and demonstration of an advanced consolidation mechanism, ii) the design of a set of power management policies to be used on computing servers and network devices, iii) the design, development and demonstration of novel hardware power modulation functionalities for FPGA-based SDN switches, iv) the design of a new extended version of the Green Abstraction Layer ETSI standard for NFV environments, as well as v) the realization of models for estimating the potential impact of service- and infrastructure-level decisions on green-aware metrics.

Focusing on the “Green Abstraction Layer” interface, the extended solution designed by the INPUT Consortium, in collaboration with Orange, explicitly considered the ETSI NFV MANO architecture to provide a sort of “back pressure” on the energy consumption in highly virtualized environments, where infrastructure owners are decoupled from service providers. In more detail, the infrastructure providers can use the extended GAL interface to encourage the service providers, through economic incentives, to adopt a fairer approach in the use of their rented resources (also by associating energy-aware states to VNFs).
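The toy Python structure below illustrates the kind of information such an extended interface could convey – energy-aware states for a rented VNF together with an economic incentive; the field names and values are invented and do not reflect the actual extended-GAL data model in D3.4.

    # Invented toy example (not the extended-GAL schema): an infrastructure provider exposes
    # energy-aware states for a rented VNF with a price incentive, so the service provider
    # can trade performance headroom for a lower energy bill.
    vnf_energy_profile = {
        "vnf": "vhg-instance-42",
        "energy_aware_states": [
            {"state": "EAS-0", "max_throughput_gbps": 10.0, "power_w": 120.0, "hourly_discount": 0.00},
            {"state": "EAS-1", "max_throughput_gbps": 5.0,  "power_w": 70.0,  "hourly_discount": 0.10},
            {"state": "EAS-2", "max_throughput_gbps": 1.0,  "power_w": 25.0,  "hourly_discount": 0.25},
        ],
    }

    def cheapest_state_meeting(profile, required_gbps):
        """Pick the lowest-power state that still satisfies the service-level requirement."""
        feasible = [s for s in profile["energy_aware_states"]
                    if s["max_throughput_gbps"] >= required_gbps]
        return min(feasible, key=lambda s: s["power_w"])

    print(cheapest_state_meeting(vnf_energy_profile, 3.0)["state"])   # -> EAS-1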

As far as the consolidation algorithm is concerned, the project proposed an original heuristic mechanism able to achieve energy savings very close to the optimal ones, while limiting the number of virtual machine migrations needed among datacenter servers.
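A minimal sketch of this general idea follows – a migration-budgeted pass that moves VMs from the least-loaded servers onto more loaded ones; this is an illustrative simplification, not the actual heuristic reported in D2.3.

    # Minimal sketch of a migration-limited consolidation pass (not the INPUT heuristic):
    # try to empty the least-loaded servers by moving their VMs "uphill" onto more loaded
    # ones, and stop as soon as the migration budget is spent.
    def consolidate(servers, capacity, max_migrations):
        """servers: dict name -> {vm_name: cpu_share}; returns (servers, migrations done)."""
        load = lambda s: sum(servers[s].values())
        migrations = 0
        for src in sorted(servers, key=load):            # least-loaded first: candidates to switch off
            for vm, demand in list(servers[src].items()):
                if migrations >= max_migrations:
                    return servers, migrations
                for dst in sorted(servers, key=load, reverse=True):
                    if dst != src and load(dst) > load(src) and load(dst) + demand <= capacity:
                        servers[dst][vm] = servers[src].pop(vm)
                        migrations += 1
                        break
        return servers, migrations

    srv = {"s1": {"vm-a": 0.2}, "s2": {"vm-b": 0.5, "vm-c": 0.3}, "s3": {"vm-d": 0.4}}
    print(consolidate(srv, capacity=1.0, max_migrations=2))   # s1 emptied with a single migration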

All the above outcomes, except the consolidation algorithm (part of the D2.3 report), are reported in detail in the D3.4 report.


Technical Challenge #2: Ground-breaking Personal Cloud Services

Obj. ID

INPUT Objectives


To add “in-network” programmability into network edge devices, in order to:

·     overcome current limitations due to the “ossification” of network technologies;

·     enable edge network devices to host cloud applications capable of cooperating with, offloading, or even completely replacing users’ Smart Devices.

Project Achievements:

It is well known that Edge Computing is becoming one of the key technological pillars (along with technologies like NFV and SDN) of the upcoming softwarization revolution in the telecommunications field, which is leading to the specification of the 5th Generation mobile networks.

In this respect, one of the main achievements of the INPUT project has been the design, the early development, and the demonstration of a complete Edge Computing framework prototype. One of the central elements of the INPUT platform, the OpenVolcano open-source project, was specifically conceived and developed during the project lifetime to enable advanced Edge Computing functionalities.

As outlined in the progress reported for objectives O1.1 and O1.2, the INPUT Consortium has addressed the challenge of providing private instances of Personal Cloud Services to be attached to the user’s Personal Network through their Virtual Images, and to be deployed in the edge infrastructures of Telecom Operators.

End-users’ Personal Networks, as well as service Back-end Networks, have been powered by the extremely scalable and efficient multi-center SDN overlay mechanism (see the Annex A of the D2.3 report), enabling fast and seamless reconfigurations of virtual networks upon migrations of virtual service instances or mobile network hand-over events of end-users.

In the final demonstrator (see the D4.4 and D4.5 reports), Personal Networks have also been “attached” to NFV-ready 4G access and core networks by means of specific VNFs produced by the INPUT Consortium (see the D3.3 report); moreover, general positioning and integration guidelines towards the NFV ecosystem have been provided (see the D5.6 report).

In addition to the points above, the INPUT project, especially in WP3, also focused on hardware offloading aspects by designing, among other results, FPGA-based SDN switches capable of performing encryption, decryption, and tunneling operations, customizable on a per-flow basis (see the D3.3 report).


To vertically integrate cloud services and network technologies:

·     To support advanced network functionalities (e.g., Personal Networks) and per-user customization in a scalable, trusted and secure way through the extension of SDN/NFV paradigms.

·     To exploit advanced power management schemes in smart programmable devices for the achievement of very high energy-efficiency levels producing OPEX savings up to 60% (with respect to a scenario without consolidation criteria and power management primitives).

Project Achievements:

As widely discussed in O1.2 and O1.4, the INPUT project devoted a significant effort to integrating personal services and network technologies at the edge facilities of Telecom Operators.

This effort is also reflected in the whitepaper, jointly prepared with the H2020 5G-PPP MATILDA Innovation Action, on the positioning and integration aspects between the NFV and Edge Computing ecosystems (see Annex A of the D5.6 report), and in the integrated prototype realized for the final project demonstration (see the D4.4 and the D4.5 reports).

Beyond these final outcomes, the project has produced a wide set of solutions for boosting the scalability of personal cloud services and of the related networking (NFV/SDN) technologies (both as direct and indirect contributions to the INPUT platform). Self-explanatory examples of such solutions are the above-described multi-context VNFs, the Multi-Center Overlays, the hardware-offloaded VNFs on the FPGA-based SDN switches, and the service chain templates (see the D2.2, D3.3 and D3.2 reports).

Moreover, Personal Networks have been designed to provide trust and security levels similar to the ones in today’s home networks.

Regarding energy-efficiency aspects, the description of achievements under O2.1 already provides a comprehensive survey of the project results in this respect.


To reduce the latency (by up to 50%[1]) of overlying cloud applications and to enhance the users’ QoE.

Project Achievements:

While the achievement of the numeric thresholds for key performance indicators is discussed in the text after this table, it is worth noting that the INPUT project designed, developed, and validated a number of solutions and mechanisms to reduce the end-to-end latency and to increase the perceived levels of Quality of Service (QoS) and QoE. Among these outcomes, the key contributions can be summarized as:

·       The introduction of the “proximity class” concept and its integration into the metadata associated with each Service_App (see the D2.2 report);

·       The mapping of proximity classes with centers in the multi-center overlay networks (see the D2.2 and D2.3 reports);

·       The design and the development of orchestration algorithms for optimally placing centers and bound Service_Apps in the facilities close to end-users (also considering their mobility profiles – see the D2.3 report);

·       The design and the development of several mechanisms for monitoring the QoE of multimedia services to be embedded into service chains, VNFs or Virtual Machines (VMs) (see the D3.2 and D3.3 reports).

In addition, specific OpenVolcano components acting at the data plane (e.g., the Quake software switch) have been developed and optimized to provide high-performance, low-latency packet forwarding. For instance, the hardware-offloaded flow steering solution in Quake avoids time-costly cache-consistency operations in computing servers (see the D2.2 and D2.3 reports).


To design a modular architecture for programmable infrastructures and devices by designing and defining open SDN, NFV and NBI protocols and APIs.

Project Achievements:

The INPUT platform has been designed, from its preliminary definition in the D2.1 report, as a highly modular ecosystem relying on state-of-the-art interfaces, which allow interchanging INPUT-native modules with equivalent third-party ones (e.g., the OpenVolcano Quake software switch can be replaced by Open vSwitch).

OpenVolcano itself, acting as the NS-OS, has an extremely modular architecture adopting open and flexible REST interfaces for internal communications.

In detail, nine interfaces have been identified and specified (see O3.1 and O3.2) along with the main roles of stakeholders and the INPUT platform building blocks.

In order to effectively support the advanced capabilities provided by the INPUT platform, most of the interfaces have been extended by the INPUT project or, in some cases, created from scratch (see especially the D2.2 report).

A comprehensive information model maintained by the INPUT platform control plane has been designed and publicly released.

Details on these new or extended interfaces have been publicly provided in deliverable reports and in scientific papers, as well as proposed to standardization bodies (e.g., the extended “GAL” interface for NFV ecosystems at the ETSI). Moreover, the implementation of most of the designed interfaces is available as open-source code in OpenVolcano (see the D2.4 and D3.5 reports).


Technical Challenge #3: Abstraction, Virtualization and Dynamic Provisioning of Resources

Obj. ID

INPUT Objectives


To define SouthBound Interfaces (SBI) based on the SDN and NFV paradigms to support novel “in-network” programmable devices, power management and a vertical integration of personal cloud services.

Project Achievements:

Beyond the positioning against the NFV ecosystems, as well as the definition of possible integration strategies (see the Annex A of the D5.6 report), the INPUT project actively worked on the following SouthBound Interfaces:

·       SDN/OpenFlow extensions and abstractions, including the aforementioned multi-center overlay networks, the extensions to support hardware offloading and power management capabilities (see the D2.2 and the D3.1 reports).

·       APIs for the lifecycle management of Service Apps and Network Functions, including the “multi-context processes” for Virtual Network Functions, and the unified API abstraction for managing the lifecycle of IaaS VMs and PaaS (CapeDwarf) instances (see the D2.2 report).

·       The INPUT information model (see the D2.2 report), which represents the underlying meta-model used in the INPUT NS-OS – NS-MAN interface.

·       The extension of the ETSI GAL standard towards NFV ecosystems (see the D3.4 report).

·       The ENM transformation engine (see the D2.2 report).

Moreover, it is worth noting that most of the above interfaces have also been implemented, integrated, and used in the final INPUT platform demonstration.


To define an “open,” consistent and complete set of NorthBound Interface (NBI) protocols to expose the “in-network” programmability and novel capabilities to Service Providers.

Project Achievements:

The INPUT project defined, implemented, and integrated in the final demonstrator the following NorthBound Interfaces:

·       The End-User interface: to enable end-users to configure their personal network and to subscribe to personal cloud services;

·       The Service Provider Interface: to enable cloud service providers to define and upgrade IaaS and PaaS personal cloud service templates, and to manage the lifecycle of any activated service instance;

·       The Network Operator Management Interface: to enable the network operator to manage and monitor the network and computing infrastructure.

The roles of these interfaces in the entire ecosystem and the information to be carried were completely specified since the definition of the INPUT platform architecture (see Sect. 6.2 of the D2.1 report).

The End-User interface with all the necessary functionalities has been developed by extending and integrating the OpenWRT operating system, realizing the control plane of the virtual Home Gateway for Personal Networks (see the D4.2 report) in the OpenVolcano platform (see Annex C of the D3.1 report).

Regarding the OpenStack interface, the main OpenStack-like computing and networking APIs have been integrated into the “Pyroclast” component of the OpenVolcano platform (see Annex C of the D3.1 report). Pyroclast has been developed to interact with the other OpenStack modules exactly as the original computing and networking modules do, thus allowing seamless integration with “external” OpenStack projects or with any software compliant with OpenStack APIs (see the D2.2 report).

Bypassing the original computing and networking modules has made it possible to take the service chains defined by Service Providers and include them in the service catalogue as “service templates” before they are instantiated (see the D2.2 report). This way, through an extended version of the OpenStack Dashboard (Horizon), cloud Service Providers can define their “in-network” service chain templates and specify which modules (i.e., Service_Apps) will be connected to the end-user Personal Network or placed into separate Back-End Networks (see the D2.2 report).

In addition to the points above, as described in the D2.2 and the D4.5 reports, the OpenStack interface has been extended (an illustrative example is sketched after the list below) to:

·       host metadata related to the aforementioned “proximity classes,” to the nature of virtual machines (e.g., shared or personal), and to configure additional offloading features (e.g., hardware-offloaded encryption and decryption);

·       provide direct REST APIs to manage the lifecycle of any activated (per-user) service instance.
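For illustration, such metadata could be attached to a running Service_App instance through a standard OpenStack Compute-like metadata call; the key names used below (e.g., “input:proximity_class”) are invented placeholders rather than the exact keys defined in D2.2.

    # Hedged example: attach INPUT-specific metadata to a Service_App instance through an
    # OpenStack Compute-like endpoint. The metadata keys are invented placeholders.
    import requests

    COMPUTE_API = "http://controller.example.net:8774/v2.1"   # OpenStack-like endpoint (invented host)

    def tag_service_app(server_id, token):
        resp = requests.post(
            f"{COMPUTE_API}/servers/{server_id}/metadata",
            json={"metadata": {
                "input:proximity_class": "strict",    # maximum tolerable distance from the end-user
                "input:instance_nature": "personal",  # personal vs shared virtual machine
                "input:hw_offload": "encryption",     # request per-flow hardware offloading
            }},
            headers={"X-Auth-Token": token, "Content-Type": "application/json"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()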


To design the following mechanisms and criteria for dynamic resource provisioning and energy management:

·     Consolidation Criteria for the re-configuration of the smart infrastructure to meet the estimated workload and user/service requirements with the minimum possible level of energy consumption.

·     Orchestration Mechanisms to dynamically migrate “in-network” Apps without causing any service interruption or performance decay.

·     A Monitoring sub-System to collect performance measures and alerts, which include network-, App-, and power-aware performance indexes.

Project Achievements:

As described in more detail in sect. 1.2.2, the INPUT project devoted a substantial effort to designing a wide set of monitoring, analytics, orchestration, and consolidation algorithms, whose scientific quality is underlined by the number of publications produced, and whose solidity and applicability are proven by the integration of a large part of them in the project demonstrator.

In detail, as reported in the D2.3 report, the project produced 5 main monitoring and analytics algorithms/mechanisms to be used for evaluating/estimating/predicting various key performance indicators at both the service and the infrastructure levels; 2 main orchestration algorithms (one of them integrated in its more sophisticated version into the OpenVolcano Vent module and in the Ericsson Network Manager, and applied in the final demonstration), and 1 consolidation algorithm (integrated in Vent for the final demonstration, too).

A comprehensive performance evaluation of the selected orchestration and consolidation algorithms can be found in the D4.4 report.


Technical Challenge #4: Demonstrate potential of INPUT-based Cloud services

Obj. ID

INPUT Objectives


The INPUT Project will set up a demonstrator with proof-of-concept implementations of its main achievements, including:

·     Some instances of the “in-network” programmable device prototype based on the DROP open-source project.

·     A proof-of-concept implementation of the INPUT core framework.

·     The two proof-of-concept implementations of personal cloud services selected as use cases, which will run on several users’ personal networks.

Project Achievements:

As clearly reported in the D4.4 and D4.5 reports, the demonstration activities fully achieved the O4.1 targets and even went beyond expectations, thanks to the completeness of the final demonstrator and to the non-planned early-bird demonstrations carried out during the project lifetime, namely: “Multi-context Network Functions,” “Virtual Multimedia Set-Top Box,” and “Racing with Remote Drones” (which won the best demo award at the 2017 IFIP/IEEE Symposium on Integrated Network and Service Management).

In detail, the final project demonstrator included:

·       a complete prototype of the INPUT platform, composed of the OpenVolcano and the Ericsson Network Manager software frameworks, including all the main functionalities designed in WP2 and WP3;

·       the two selected use-case applications related to multimedia and IoT services;

·       a couple of FPGA-based OpenFlow switches with the offloading capabilities designed in the project;

·       a hardware testbed including 3 hardware OpenFlow switches (virtualized in multiple instances) and about ten servers, which have been configured to represent an edge network with 4 Points of Presence and 8 aggregation access switches;

·       a smart TV and a couple of sensors to represent the smart devices in the end-user home;

·       a 4G-connected tablet acting as the end-user mobile terminal, and a 4G-connected sensor (representing a sensor in the car of the end-user);

·       3 LTE base stations (eNodeBs), realized with Software Defined Radio boards and micro-PCs, and a software-based implementation of a 4G Evolved Packet Core (eNodeB and EPC have been provided by third-party software applications);

·       a couple of VNFs (provided by the project) to attach the INPUT platform between the eNodeBs and the EPC, as well as to retrieve relevant events from the mobile networks (e.g., user hand-overs);

·       an Ixia traffic generator to inject traffic and measure performance with high precision.

A number of key performance indicators have been experimentally collected to evaluate the performance levels provided by the INPUT platform, also thanks to an advanced instrumentation of the testbed (e.g., all the servers have been configured to synchronize their clocks at hardware level with the Precision Time Protocol (PTP, or IEEE 1588)).

To better highlight the project achievements at every level, the final “live” demonstration has been organized in three main sessions related to the two use-case applications and to the evaluation of the INPUT orchestration and consolidation algorithms, respectively. To prove the scalability of the proposed algorithms, the third demonstration session has been carried out on a large emulated network topology with approximately 20 PoPs (including hundreds of servers) and almost 80 network nodes, and with traffic traces coming from public datasets.

The three demonstration sessions have also been captured in videos publicly released by the project.


As evident from Table 1, the final, integrated design of the INPUT platform has fulfilled the targeted functional objectives and further proved the ability to embrace the concepts of Edge and Fog Computing that have emerged during the Project’s lifetime. It is worth noting that, at the proposal stage of the project, the Consortium identified a set of target key performance indicators to be achieved with the application of the INPUT platform and its main functional blocks. Such KPIs are reported in Table 2.

These KPIs, as underlined since the project proposal and highlighted by the Review Experts[2], have an “aggregated” nature, in the sense that their achievement is heavily affected by external-to-INPUT factors, such as: i) the overall Telecom Operator infrastructure in which the INPUT platform would be deployed, and ii) the nature and the implementation of the cloud services, considering both the virtual images and their chaining, as offered by the Service Providers.

Given the anticipatory, cross-cutting nature of the INPUT project, it was not possible to formulate target KPIs in a more platform-oriented fashion, less dependent on external factors, since many of the underlying concepts and aspects became part of the ICT scientific and industrial communities only after the rise of the Edge and Fog Computing paradigms.

Owing to the considerations above, the achievement of the target KPIs cannot be fully and directly validated by the INPUT final demonstrator; rather, a number of “impact models” are needed to project the obtained experimental results onto possible evolutionary scenarios, also highlighting the overall impact as a function of external-to-INPUT factors. This has been the strategy pursued by the INPUT Consortium.

As described in detail in sect. 8 of the D4.4 report, the assessment of the target KPIs mainly relied on two impact models, as well as on the obtained experimental results.

The used impact models are as follows:

·       Impact model 1 (reported in sect. 7.1 of the D2.1 report, and published as a scientific paper in the SIREN workshop proceedings) analyses under what circumstances replacing physical objects with their virtual images reduces the carbon footprint and greenhouse gas emissions in a real networking context. To do so, the model takes into account the impact of different virtualization levels, in the presence or absence of power-saving mechanisms, on the overall energy efficiency, thoroughly outlining how the carbon footprint varies depending on the virtualization level of a device (a deliberately simplified version of this relation is sketched after this list).

·       Impact model 2 (reported in Annex B of the D5.6 report) provides an analysis of a citywide deployment of in-network Data Centers (DCs) to evaluate how different deployment scenarios (with more distributed small DCs, or fewer centralized DCs) affect both the end-user QoS and the Telco provider costs (in terms of both OPEX and CAPEX). The model is based on real open data made available by Telecom Italia for the metropolitan area of Milan (Italy), and considers the nameplate performance of state-of-the-art network access technologies.
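As a deliberately simplified illustration of the reasoning behind Impact Model 1 (ignoring embodied carbon, network energy, and usage profiles, which the actual model accounts for): if a server with yearly footprint C_srv is shared by N virtual appliances, each replacing a physical appliance with footprint C_phy, then

    % Deliberately simplified relation, not the actual Impact Model 1.
    \text{saving} \;=\; 1 - \frac{C_{\mathrm{srv}}/N}{C_{\mathrm{phy}}}
    \qquad\Longrightarrow\qquad
    \text{saving} \ge 0.75 \;\iff\; N \ge \frac{4\,C_{\mathrm{srv}}}{C_{\mathrm{phy}}} ,

which is the kind of threshold condition behind the minimum number of virtual set-top boxes per server discussed under Table 2.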


Table 2. Target KPIs and mapping of the INPUT achievements.

INPUT target KPIs


Target KPI description


The virtual image will allow reducing the carbon footprint of completely and partially virtualized appliances to 50% and 75%, respectively.

Mapping of the INPUT achievements:

This mapping has been performed by considering the virtual Set-Top Box (vSTB) prototype use-case application as an example of a fully virtualized appliance. Since the developed application provides more functionalities than off-the-shelf set-top boxes, two versions of the use-case application have been considered: the former, called vSTB-, providing functionalities similar to commercial products, and the latter, called vSTB+, with all the developed functionalities. The two versions differ only in the presence of the “Personal Acquirer” Service_App (included only in the vSTB+ case).

According to Impact Model 1, and under the assumption that the Telecom Edge Infrastructure is composed of medium-range servers (details are in the D4.4 report), in order to achieve the targeted carbon footprint reduction of 75%, a server must host at least 160 virtual set-top boxes.

Considering the CPU and memory requirements of the virtual machines composing the vSTB prototype, reported in the D4.2 report and applied in the final demonstration, it turns out that, assuming an average set-top-box utilization of 4 hours per day[3], the achievable carbon footprint savings are approximately 80% and 78% for the vSTB- and vSTB+ cases, respectively. Fixing the gain to the targeted 75%, the estimate suggests that the vSTB- can be used up to 6 hours per day on average, and the vSTB+ up to 4.5 hours per day. It should also be noted that, given their prototype nature, further improvements are possible.

Regarding the partial virtualization case, unfortunately it has not been possible to find realistic parameters on the carbon footprint of IoT devices to be used as terms of comparison in this analysis. This absence of parameters is mainly due to the well-known heterogeneity of device technologies and manufacturing approaches in the IoT field.

Summarizing the discussion above, the Consortium believes that, at least for completely virtualized appliances, the INPUT platform enables the design of personal cloud services able to fully replace or extend the capabilities of physical appliances while saving more than 75% of their carbon footprint.


To exploit advanced power management schemes in smart programmable devices for the achievement of very high energy-efficiency levels producing OPEX savings up to 60% (with respect to a scenario without consolidation criteria and power management primitives).

Mapping of the INPUT achievements:

The impact on OPEX of consolidation and power management mechanisms in cloud datacenters is well known to be heavily influenced by a number of factors related to the design and deployment of the infrastructure (architectural choices, dimensioning, placement, etc.) and to its workload levels. These factors remain valid also in Edge Computing scenarios, and are further emphasized by new degrees of freedom, such as the number and placement of the in-network datacenters in upcoming Telecom Operator infrastructures.

From this perspective, Impact Model 2 provides insight into how different datacenter planning strategies in upcoming Telecom Operator infrastructures can impact the Total Cost of Ownership (and hence both OPEX and CAPEX), also highlighting advantages and drawbacks in terms of the network/service performance levels that can be provided. The obtained results suggest that the use of a single in-network datacenter for a large metropolitan area like Milan (Italy) could assure an acceptable trade-off between OPEX and latency. In detail, the end-to-end latency can be limited to a few milliseconds, while OPEX savings decrease by 20% with respect to scenarios with more distributed computing resources.

Starting from these lessons learned, the Consortium dimensioned the network infrastructure to be analysed accordingly, and performed a number of tests on the consolidation algorithm proposed in WP2 and fully integrated in the demonstrator. The obtained results (reported in the D4.4 report) show how the INPUT consolidation algorithm, working only on reserved resources, allows OPEX savings (due to energy savings) much larger than the 60% target with respect to standard OpenStack operations when the datacenter utilization is below approximately 37% of its total capacity.

Further OPEX savings can also be achieved with power management schemes (acting not on reserved resources, but on their actual usage). In this respect, the power management scheme reported in sect. 3.1 of the D3.4 report showed that a further 10% reduction of energy consumption is possible by better consolidating VMs inside servers and using ACPI C- and P-states.

Owing to the aforementioned results, and considering that datacenter workloads hardly ever exceed such levels during daily profiles – typical “cloud” datacenters have a peak utilization of only 40%, with long low-demand periods with utilization levels as low as 5%[4] – it can be concluded that the achievements obtained by the INPUT project allow this KPI target to be met.


To reduce the latency (by up to 50%) of overlying cloud applications and to enhance the users’ QoE.

Mapping of the INPUT achievements:

Latency can be driven by a number of factors, both internal and external to the INPUT platform, the most relevant ones being i) the distance between the user and the facility hosting the services, ii) the nature of the service chain, and iii) the efficiency of the involved computing and networking processes.

Most of the latency reduction enabled by the INPUT platform certainly depends on the possibility of hosting personal cloud services in datacenters deployed inside the telecom infrastructure, avoiding crossing the public Internet. In this respect, Cisco estimates that the latency towards public cloud facilities in Western Europe will decrease to 46 ms in 2021[5].

Regarding network infrastructure, the end-to-end latency of LTE access networks is required to stay below 10 ms, and generally to be less than 20 ms up to the core termination. The target value in current 5G technological design is 1 ms[6].

Thus, even in the case of deployment of the INPUT platform in datacenters at the termination of the 4G mobile network core, the latency reduction would be at least approximately 56.5% with respect to the Cisco estimate.

Recalling the results of Impact Model 2 (considering only the latency in the wired network), if edge datacenters were geographically distributed close to the eNodeBs (including the case of one datacenter per metropolitan area considered in Impact Model 2), an overall end-to-end latency between 10 and 12 ms could be achieved, corresponding to a reduction of 74–78% with respect to the public cloud. This estimate is confirmed by the experimental results obtained with the final demonstrator (equipped with the VNFs intercepting the S1 and S1-AP protocols) and described in sect. 5.1 of the D4.4 report.
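For clarity, the quoted percentages follow directly from simple ratios against the 46 ms public-cloud estimate:

    % Worked check of the reduction figures quoted above.
    \frac{46 - 20}{46} \approx 56.5\%, \qquad
    \frac{46 - 12}{46} \approx 74\%, \qquad
    \frac{46 - 10}{46} \approx 78\%.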

Obviously, the rise of 5th Generation mobile networks can further and significantly reduce the latency towards edge computing facilities.

Owing to the simple analysis above, also confirmed by the experimental results, the Consortium believes that the most manifest advantages provided by the INPUT platform reside in the scalability achieved in its main run-time operations. As shown in sect. 4 of the D4.4 report, the processes performing service instantiations, virtual overlay network re-configurations, virtual machine placement optimization, and re-consolidation of datacenters take maximum computation times of a few tens of milliseconds even in the presence of very complex networks and service chains. This scalability is clearly fundamental to enable edge computing infrastructures to support extremely low-latency services and to assure outstanding QoE levels.


Reduced traffic volumes forwarded to datacenters (up to 30%).

Mapping of the INPUT achievements:

Traffic flowing to/from datacenters in the public Internet obviously depends on the nature of the applications. Multimedia applications, except when used for the storage of private contents, need to receive/send data from/to the public Internet. In IoT applications, the case of services completely hosted on the edge infrastructure, without the need for a counterpart in the public Internet (i.e., DC_Apps), can be more widespread. This can happen, for example, in Ambient Assisted Living applications.

In any case, monitored data can be synchronized to the public Internet datacenter at a lower pace and in a more compressed fashion. Cisco estimated that IoT traffic and data could be reduced by 90% by adopting edge computing[5].

Owing to the considerations above, the INPUT Consortium collected preliminary feedback during the demonstration experiments. A number of network port counters have been monitored to measure the traffic volume remaining inside the Telecom Operator network and the volume exiting towards “external servers” mimicking the public Internet. At the end of all the experiments related to the use-case applications, it was observed that 64.5% of the traffic remained inside the Telecom Operator infrastructure.


Proof-of-concept implementations of two use-case personal cloud services that will run on several emulated users’ personal networks.

Mapping of the INPUT achievements:

As demonstrated in sects. 6 and 7 of the D4.4 report and by the D4.5 report, the two use-case applications have been realized and tested on a fully working INPUT platform.

In addition to the use-case applications above, it is worth recalling that INPUT also produced seven early-bird (non-planned) demonstrations, some of them constituting additional use-case applications (e.g., the “Racing with Remote Drones” demo, which also won the best demo award at the 2017 IFIP/IEEE IM Conference).

[1] The achievement of specific performance targets strongly and obviously depends on the nature and the implementation of the offered cloud service. For this reason, the potential performance improvement due to the INPUT technologies cannot be represented as absolute numbers, but only a reasonable (and conservative) estimate of its upper bound is herein reported.

[2] The project proposal and the successive DoA document included the same footnote here reported as footnote 1 on page 10: “The achievement of specific performance targets strongly and obviously depends on the nature and the implementation of the offered cloud service. For this reason, the potential performance improvement due to the INPUT technologies cannot be represented as absolute numbers, but only a reasonable (and conservative) estimate of its upper bound is herein reported.”

Regarding the comments from the Review Experts, see the answer to Recommendation #3 – R3 – in Sect. 4.

[3] European Commission DG INFSO, Final Report – Impacts on ICT on Energy Efficiency, 2008.

[4] R. Lent, “Analysis of an energy proportional data center,” Ad-Hoc Networks Journal, Elsevier, vol. 25, part B, Feb. 2015, pp. 554-564.

[5] Cisco Global Cloud Index: Forecast and Methodology, 2016–2021, Whitepaper. URL:

[6] the 5G Infrastructure Association, “5G Vision,” Whitepaper, Feb. 2015. URL:



