
cathrynpeoples: Paper Abstracts

Peer-reviewed Papers:

Creative Assessment Design on a Master of Science Degree in Professional Software Development ... An MSc conversion degree is one which retrains students in a new subject area. This type of programme opens new opportunities to students beyond those gained through their originally chosen degree. Students entering a conversion degree do so, in a number of cases, to improve career options, which might mean moving from an initially chosen path to gain skills in a field that they now consider more attractive. With a core goal of improving future employability prospects, specific requirements are therefore placed on the learning outcomes achieved from the course content and delivery. In this paper, the learning outcomes are focused on the transferable skills intended to be gained as a result of the assessment design, disseminated to a cohort of students on a Master of Science (MSc) degree in Professional Software Development at Ulster University, United Kingdom. Coursework submissions are explored to demonstrate how module learning has been applied creatively.

Research-based Education on a Master of Science Degree in Professional Software Development ... This article contributes to the narrowly-investigated field of research-based assessment. Research-based assessment supports student learning by offering choice in how it takes place. It is not widely offered, however, for reasons which can include the challenge of marking outputs consistently and the importance of ensuring that students engage early with the task. The approach presented in this article exploits this technique and additionally merges it with authentic assessment, in which students are involved in the assessment design. The study confirms the effectiveness of the approach through the mark profile, even though not all students engaged with it at the earliest opportunity. The study also identifies how students became more competent with research-based assessment when reflecting on feedback for a similar piece previously assessed.

A Review of IoT Service Provision to Assess the Potential for System Interoperability in an Uncertain Ecosystem ... A sprawling and uncoordinated Internet of Things (IoT) environment has evolved in an uncontrolled manner where applications and infrastructure are made available as and when desired by operators, and, in many instances, without concern for others co-existing here. There is evidence already of IoT Service Providers changing their operational priorities and, as a result, the IoT may have an uncertain long-term future. This paper is driven by an assumption that an IoT environment with longevity will not be achieved using today's siloed systems; it is believed that these will not naturally mesh together without strategies in place to facilitate this. We therefore examine the reasons why and extent to which IoT technologies are generally not interoperable through the distinct ways that services are made available.

Building Stakeholder Trust in Internet of Things (IoT) Data Services using Information Service Level Agreements (SLAs) ... A Service Level Agreement (SLA) defines a contract between network service providers and consumers, specifying the terms of a service which providers will make available and the conditions which consumers will accept. To date, SLAs have been specified using basic terms, such as availability and network performance, with a consumer being compensated in the event that the service provided does not meet the terms agreed. Given changes in the ways in which network services are now made available, however, SLA terms are changing to capture both the differences in service provision and, additionally, the responsibilities of the parties involved. It is this aspect of information SLAs to which we respond in this work, and we propose an SLA model which accommodates the requirements of these new relationships. We also propose a set of metrics, a selection of which are presented in this paper to demonstrate our concept, and recommend that consumers may adapt them to their needs. Finally, due to the intricate relationships between data consumers and data providers in the IoT environment and the fact that metric adaptation may lead to SLA violation, we discuss SLA conflict resolution through prioritizing non-functional metrics on a per-customer basis.
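The per-customer prioritization described in the abstract can be sketched in a few lines; note that the metric names and priority weights below are hypothetical illustrations, not values taken from the paper's SLA model.

```python
# Illustrative sketch only: metric names and weights are invented, not the
# paper's actual SLA metrics.

def resolve_sla_conflict(violated_metrics, customer_priorities):
    """Order violated non-functional metrics by this customer's priority,
    highest priority first, so the most important violation is handled first."""
    return sorted(violated_metrics,
                  key=lambda m: customer_priorities.get(m, 0),
                  reverse=True)

# A hypothetical customer who values data freshness above availability and latency:
priorities = {"data_freshness": 3, "availability": 2, "latency": 1}
print(resolve_sla_conflict(["latency", "data_freshness"], priorities))
```

A different customer would supply a different priority mapping, which is what makes the resolution per-customer rather than global.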

The Development of an iOS Application for Gym Membership Management with Firebase Integration and Gamification Support ... It has become apparent that a number of health and fitness centers continue to use outdated, inefficient, and ineffective methods when it comes to the management of their members. The technologies and methods at their disposal are neither appropriate nor current, and they often use several distinct IT systems to manage gym memberships. With an influx of boutique gyms opening, offering individualized membership and fitness services at a fraction of the cost of regular gyms, it is paramount that bigger gyms overcome their software limitations to retain current members.

A Survey of the Ability of the Linux Operating System to Support Online Game Execution ... "Linux has suffered sluggish home user uptake due mainly to the dominance of rivals, and has seen numerous incarnations as a gaming platform fall flat. Gaming is a particularly sensitive application given its intensive bandwidth and system response requirements; these applications therefore place specific demands on the Operating System platform on which game play is supported. In this work, the ability of the Linux operating system to support execution of online games is explored through a survey of the state-of-the-art in this area. Given the recent increase in cloud-based online gaming, it can be concluded that the time is ripe for more widespread Linux uptake, especially in the gaming domain. This is particularly true today given the amount of exposure to Information Technology across society in general, and ongoing deployment of Internet of Things environments: Linux's open source, modular and freely customisable design may therefore not be as daunting as before, and the unique benefits of this platform may be exploited for the experiences it can bring to applications in general and, specific to the context of this work, players in their game play. This paper makes a unique contribution to the field: Although a number of articles are available within the general area of Linux and gameplay, a thorough survey on this issue has not been seen so far. This is therefore the gap to which this paper contributes."

The Design of a Gamification Algorithm in a Music Practice Application ... "Keeping track of pupils' progress across different instruments and lessons, and what they are meant to be practicing, can be challenging. The typical solution is to use a book in which teachers write notes and pupils record practice. This can, however, easily be lost or become illegible. Furthermore, music education and self-directed practice is one area of education which is not widely gamified, with gamification describing a technique that drives specific human behaviors, motivates users, and has proven success in influencing learning. An application could therefore be created to respond to these needs by recording and tracking music practice whilst also gamifying student learning. An algorithm which accommodates these requirements is presented in this paper."

A Standardizable Network Architecture Supporting Interoperability in the Smart City Internet of Things ... "An increase of 2.5 billion people is expected in urban areas by 2050, when 66% of the world population will reside there. It is therefore reasonable to assume a parallel growth in the smart city Internet of Things (IoT). A challenge, however, is presented in the interoperability between the devices deployed, limited due to the ad hoc and proprietary ways in which systems have been rolled out to date. A standardized network infrastructure specific to the IoT can work towards resolving these challenges. This approach to operation, however, raises questions with regard to how an architecture may support different devices and applications simultaneously, and additionally be extensible to accommodate applications and devices not available at the time of the framework’s development. In this paper, these questions are explored, and an IoT infrastructure which accommodates the interoperability communication constraints and challenges today is proposed."

A Web-based Portal for Assessing Citizen Wellbeing ... "Despite advancements in sensor technologies, current smart city systems make little attempt to collect subjective data from humans. This can be overcome by exploiting the concept of humans as sensors, letting people become integral to the decision-making chain by providing their thoughts, feelings, and general feedback on how they interact with their city and, more importantly, how city services affect their lives. A significant challenge of operating applications in smart cities, however, is achieving interoperability and easily accessible application support without modifying the hardware and software already available, which slows the development and deployment process. In designing a solution to support the objectives described, the authors thus harnessed open source technologies, with the expectation that these are readily usable with systems in existence and are therefore easy to integrate into today's smart city technology fabric."

Using IT to Monitor Well-being and City Experiences ... "Gibson first proposed the term smart city in 1992 to broadly define how urban development was turning toward technology at that time and the subsequent innovation and globalization possible as a result. Since then, the term smart city has evolved, and Zanella (2014) uses it to describe how a city will be "equipped with microcontrollers, transceivers for digital communication, and suitable protocol stacks that will make them able to communicate with one another and with the users." These sensors are used around cities to record a wide range of data, including information on utilities (e.g., smart grids, street lighting, water and electricity consumption), the environment (e.g., temperature, humidity, pollution), and traffic management (e.g., parking space usage, traffic monitoring and modeling). This data is analyzed by city authorities, utility companies, and businesses to enable maintenance and investment in infrastructure and services throughout the city."

The Cloud Afterlife: Managing your Digital Legacy ... "As technology has evolved, so too has the way in which we store information: Simple items like photographs which, in the past, we could have flicked through in a printed album are now often only stored online. If they are not accessible online, they will therefore not be accessible at all once we are no longer around to locate them. This may have a psychological impact on the people we leave behind. In addition to the ethical concern, management of assets in the cloud is also a resource management challenge from the sustainability and environmental perspectives: As redundant data increasingly consumes resources, network sustainability becomes compromised. There is therefore an opportunity to optimize the process for the ethical, environmental, and sustainability implications of doing so. To determine the extent to which a problem exists both now and potentially in the future, we have conducted a survey to capture perceptions on cloud footprints in general, and the importance which people place on recovering digital assets from the cloud prior to death. Our results confirm that online users are generally unaware that this is an aspect which they should be considering in their estate planning - only 29% of respondents have considered what will happen to their online data after death - but the majority agree that it is important and indicate that they will give it greater attention in the future."

Profiling User Behaviour for Efficient and Resilient Cloud Management ... "User behaviour profiling can be used within network management schemes to indicate the capabilities required from a cloud management proxy in terms of the way and rate at which it should be aware of the real-time network state and the resources which require provision. A management proxy needs, for example, the ability to monitor change in the most popular webpages associated with a website or files associated with an application so that this detail may influence the caching strategy for optimised performance and operation. When resources are provisioned dynamically across a cloud, this will accommodate efficiency and security objectives, and also take into account the ways in which users are demanding services to maximise the likelihood that requirements are met. Many online companies now operate in this way and analyse customer behaviour to improve services by meeting predictable requirements and manipulating unpredictable behaviour. It is therefore to this gap that we respond. The model is developed around behaviour profiles of user access and activities associated with the Wireless Sensor Knowledge Archive (Wisekar) website hosted at the Indian Institute of Technology in Delhi, India. Trends in user access and activities with the website are identified. In response, a cloud management framework is proposed. A user satisfaction metric based on visit duration and return visits controls cloud operation to improve the user experience and optimise management efficiency. A Certainty Factor quantifies the confidence with which management is applied such that actions enforced accommodate both predictable and unpredictable user behaviour."

Application Resource and Activity Footprinting to Influence Management in Next Generation Clouds ... "Standardised management solutions are an objective for the next generation of cloud to autonomically provision and configure resources in a manner generically applicable across platforms and applications. Cloud interoperability, a consequence of standardised operation, is desired in spite of the fact that platforms have variable management requirements and applications have various resource demands. In this paper, the footprint of an application developed at the Indian Institute of Technology in Delhi is explored, alongside consideration of its scalability as increasing volumes of requests are supported. The limiting network resource in this deployment is bandwidth availability at the server, which restricts the extent to which memory and CPU resources can be consumed, regardless of the number of application requests sent to the server. Definition of resource consumption relationships between attributes while servicing application requests leads to recommendations on server and network loading in a manner which optimises the overall balance of resource utilisation across all. This results in an average footprint across CPU, memory and network of 0.85 (max. of 1)."
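The combined footprint figure reported in the abstract is an average of normalised utilisation values across CPU, memory and network. A minimal sketch of that calculation follows; the individual utilisation figures used here are invented for demonstration, only the 0.85 average comes from the abstract.

```python
# Hypothetical illustration of a combined resource footprint. The three
# utilisation inputs are invented; the abstract reports only the average (0.85).

def combined_footprint(cpu, memory, network):
    """Average of three normalised utilisation values, each in [0, 1]."""
    for value in (cpu, memory, network):
        if not 0.0 <= value <= 1.0:
            raise ValueError("utilisation must be normalised to [0, 1]")
    return (cpu + memory + network) / 3

print(round(combined_footprint(0.80, 0.85, 0.90), 2))  # 0.85
```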

The Standardisation of Cloud Computing: Trends in the State-of-the-Art and Management Issues for the Next Generation of Cloud ... "Roll-out of future cloud systems will be influenced by regulations from the standardisation bodies, if made available across the community. Trends in cloud deployment, operation and management to date have not been guided by any regulatory standards, and resources have been deployed in an ad hoc manner as demanded according to the business objectives of service providers. This is the least costly and most quickly revenue-returning business model. It is not, however, the most cost-effective approach on a long-term basis: As a consequence of this roll-out model to date, the interoperability of resources deployed across clouds managed by different operators is restricted through an inability to allocate workload to them in a regulated and controllable manner. The absence of standardised approaches to cloud management is therefore beginning to be accommodated such that the cost and performance advantages of interoperable operation may be exploited. In this paper, we review the state-of-the-art in standards across the field and trends in their development. We present a model which defines the drivers for cloud interoperability and the constraints which restrict the extent to which this may realistically occur in future scalable solutions. This is supplemented with discussion of future challenges foreseen with regard to cloud operation and the way in which standards require provision such that cloud interoperation may be accommodated."

Energy Aware Scheduling across 'Green' Cloud Data Centres ... "Data centre energy costs are reduced when virtualisation is used as opposed to deploying physical resources in a volume sufficient to accommodate all application requests. Nonetheless, regardless of the resource provisioning approach, opportunities remain in the way in which resources are made available and workload is scheduled. Cost incurred at a server is a function of its hardware characteristics. The objective of our approach is therefore to pack workload into servers, selected as a function of their cost to operate, to achieve the maximum recommended utilisation (or as close to it as possible) in a cost-efficient manner, avoiding instances where devices are under-utilised and management cost is incurred inefficiently. The approach is based on queuing theory principles and the relationship between packet arrival rate, service rate and response time, and recognises a similar exponential relationship between power cost and server utilisation to drive intelligent server selection for improved efficiency. There is a subsequent opportunity to power redundant devices off to exploit power savings through avoiding their management."
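The queuing relationship the abstract leans on can be sketched with the standard M/M/1 mean response time, T = 1 / (mu - lambda), driving a cost-aware server choice. This is a simplified illustration under that textbook model, not the paper's scheduling algorithm; the server speeds and power costs below are hypothetical.

```python
# Sketch using the M/M/1 mean response time T = 1 / (mu - lambda).
# Server names, service rates and power costs are hypothetical.

def response_time(arrival_rate, service_rate):
    """M/M/1 mean response time in seconds; requires service_rate > arrival_rate."""
    if service_rate <= arrival_rate:
        raise ValueError("unstable queue: service rate must exceed arrival rate")
    return 1.0 / (service_rate - arrival_rate)

def select_server(arrival_rate, servers, max_response):
    """Pick the cheapest server that keeps mean response time within bound.
    servers is a list of (name, service_rate_req_per_s, power_cost) tuples."""
    feasible = [s for s in servers
                if s[1] > arrival_rate
                and response_time(arrival_rate, s[1]) <= max_response]
    return min(feasible, key=lambda s: s[2], default=None)

servers = [("fast", 200.0, 9.0), ("medium", 120.0, 5.0), ("slow", 90.0, 3.0)]
# At 100 req/s with a 0.06 s bound: "slow" is unstable (90 < 100), and
# "medium" meets the bound (T = 0.05 s) at lower power cost than "fast".
print(select_server(100.0, servers, 0.06))
```

Workload is thereby packed onto the cheapest server whose speed still satisfies the response-time requirement, leaving costlier or redundant servers available to be powered off.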

Cloud Services in Mobile Environments - The IU-ATC UK-India Mobile Cloud Proxy Function ... "Mobile networks currently play a key role in the evolution of the Internet due to exponential increase in demand for Internet-enabled mobile devices and applications. This has led to various demands to re-think basic designs of the current Internet architecture, investigating new and innovative ways in which key functionalities such as end-to-end connectivity, mobility, security, cloud services and future requirements can be added to its foundational core design. In this paper, we investigate, propose and design a functional element, known as the mobile cloud proxy, that enables the seamless integration and extension of core cloud services on the public Internet into mobile networks. The mobile cloud proxy function addresses current limitations in the deployment of cloud services in mobile networks tackling limitations such as dynamic resource allocation, transport protocols, application caching and security. This is achieved by leveraging advances in software-defined radios (SDRs) and networks (SDNs) to dynamically interface key functions within the mobile and Internet domains. We also present some early benchmarking results that feed into the development of the mobile cloud proxy to enable efficient use of resources for cloud based services such as social TV and crop imaging in mobile environments. The benchmarking experiments were carried out within the IU-ATC India-UK research project over a live international testbed which spans across a number of universities in the UK and India."

Performance Evaluation of Green Data Centre Management supporting Sustainable Growth of the Internet of Things ... "Network management is increasingly being customised for green objectives due to the roll out of mission-critical applications across the Internet of Things and their execution, in a number of cases, on battery-constrained devices. In addition, the volume of operations across the Internet of Things is attracting climate change concerns. While operational efficiency of wireless devices and in data centres (which support operation of the Internet of Things) should not be achieved at the expense of Quality of Service, optimisation opportunities should be exploited and inefficient resource use minimised. Green networking approaches, however, are not yet standardised, and there is scope for novel middleware architectures. In this paper, we explore operational efficiency from the perspective of activities in data centres which support the Internet of Things. This includes evaluation of the effectiveness of mechanisms integrated into the e-CAB framework, an algorithm proposed by the authors to manage next generation data centres with green objectives. A selection of its policy mechanisms have been implemented in the NS-2 Network Simulator to evaluate performance; configuration decisions are described in this paper and presented alongside experimental results which demonstrate the optimisations achieved. Focus lies, in particular, on rate adaptation of its context discovery protocol, which is responsible for capturing real-time network state. Performance results reveal a small overhead when applying network management and validate improved efficiency through adaptation in response to environment dynamics."

An Energy Aware Network Management Approach using Server Profiling in 'Green' Clouds ... "Clouds and data centres are significant consumers of power. There are, however, opportunities for optimising carbon cost here, as resource redundancy is provisioned extensively. Data centre resources, and subsequently the clouds which support them, are traditionally organised into tiers; switch-off activity when managing redundant resources therefore occurs in an approach which exploits cost advantages associated with closing down entire network portions. We suggest, however, an alternative approach to optimise cloud operation while maintaining application QoS: Simulation experiments identify that network operation can be optimised by selecting servers which process traffic at a rate that more closely matches the packet arrival rate, and resources which provision capacity in excess of that required may be powered off for improved efficiency. This recognises that there is a point in server speed at which performance is optimised, and operation at a rate greater than or less than this will not achieve optimisation. A series of policies have been defined in this work for integration into cloud management procedures; performance results from their implementation and evaluation in simulation show improved efficiency when selecting servers based on these relationships."

Context-Aware Characterisation of Energy Consumption in Data Centres ... "Carbon emissions are receiving increased attention and scrutiny in all walks of life, and the ICT sector is no exception. With the increase in on-demand applications and services, together with on-demand compute/storage facilities in server farms or data centres, there are self-evident increases in the power requirements to maintain such systems. Commentators on the impact of increased carbon emissions when powering electrical systems in general regularly stress negative side-effects such as influence on climate change. Action is subsequently being encouraged to halt further environmental damage. The problem is explored in this paper from the point of view of carbon emissions from data centre operations and the development of energy-aware management and energy-efficient networking solutions. Data centre energy consumption costs drive the evaluation process within a Data Centre Energy-Efficient Context-Aware Broker (DCe-CAB) algorithm designed as an original solution to this significant carbon-contributing network scenario. In this paper, performance requirements and objectives of the DCe-CAB are defined, along with a case study demonstration of the way in which it optimises selection and operation of data centres using context-awareness."

Towards the Simulation of Energy-Efficient Resilience Management ... "Energy-awareness and resilience are becoming increasingly important in network research. So far, they have been mainly considered independently from each other, but it has become clear that there are important interdependencies. Resilience should be achieved in a manner which is energy-efficient, and energy-efficiency objectives should respect the networks' need to be prepared to observe and react against disruptive activity. Meeting these complementary and sometimes conflicting research objectives demands novel strategies to support energy-efficient resilience management. However, the effective evaluation of cross-cutting energy and resilience management aspects is difficult to achieve using the tool support currently available. In this paper, we explore a range of network simulation environments and assess their ability to meet our energy and resilience modelling objectives as a function of their technical capabilities. Furthermore, ways in which these tools can be extended based on previous related implementations are also considered."

Autonomic Context-Aware Management in Interplanetary Communications Systems ... "Maintaining connectivity in deep-space communications is of critical importance to key missions and the ability to adapt node behavior “on-the-fly” can have dynamic benefits. Autonomic operation minimizes failure risk by performing local configurations using collected context data and on-board policies, improving response time to events, and reducing remote mission management expense. Herein, we evaluate cost-benefit impacts when a context-aware brokering algorithm developed to achieve autonomy is applied to interplanetary communications systems."

Energy-Aware Data Centre Management ... "Cloud computing is one way in which communications within and between data centres can be optimised by using resources which are physically close to the client, are exposed to lower electricity costs, contribute a smaller carbon footprint, or have residual resources sufficient to fulfil Quality of Service requirements. Optimisation of activity involving data centres is a next generation network management objective due to continued growth in the number of plants and the volume of operations within, factors which contribute to environmental concerns associated with energy consumption and carbon emissions from data centre facilities when renewable energy resources are not used. In this paper, we present an algorithmic mechanism developed to automate selection of a data centre in response to application requests, the Data Centre Energy-Efficient Context-Aware Broker (DCe-CAB). Through integration of the DCe-CAB in a case study scenario, operational improvement through reduction of carbon emissions and balancing of other performance-related attributes, including delay and financial cost, is achieved, validating the DCe-CAB's positive impact."

"Operational Performance of the Context-Aware Broker (CAB): A Communication and Management System for Delay-Tolerant Networks (DTNs) ..." "The Context-Aware Broker is a policy-based management system developed by the authors to achieve autonomic communication in delay-tolerant networks. This is in recognition of environment challenges when operating in remote regions, and time, human, and financial resource costs incurred during mission-specific configuration. The Context-Aware Broker seeks to limit cost overheads through achieving a standardised transmitting approach, and operating autonomically to optimise reliability and sustainability levels achieved. In achieving its network management function, a cost-benefit impact is the consequence. Performance results from the Context-Aware Broker's deployment in ns-2.30 are presented and evaluated in this paper."

A Context-Aware Policy-Based Framework for Self-Management in Delay-Tolerant Networks (A Case Study for Deep Space Exploration) ... "Policy-based management allows the deployment of networks offering quality services in environments beyond the reach of real-time human control. A policy-based protocol stack middleware, the context-aware broker, has been developed by the authors to autonomically manage the remote deep space network. In this article example policy rules demonstrate the concept, and prototype results from ns-2.30 show the overall positive cost-benefit impact in an example scenario."

TCP's Protocol Radius: the Distance where Timers Prevent Communication ... "We examine how the design of the Transmission Control Protocol (TCP) implicitly presumes a limited range of path delays and distances between communicating endpoints. We show that TCP is less suited to larger delays due to the interaction of various timers present in TCP implementations that limit performance and, eventually, the ability to communicate at all as distances increase. The resulting performance and protocol radius metrics that we establish by simulation indicate how the TCP protocol performs with increasing distance radius between two communicating nodes, and show the boundaries where the protocol undergoes visible performance changes. This allows us to assess the suitability of TCP for long-delay communication, including for deep-space links."
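A back-of-the-envelope version of the protocol radius idea is the one-way distance at which propagation delay alone exhausts a TCP timer. The 64-second cap used below is an assumed, illustrative bound on the retransmission timeout, not a figure taken from the paper, whose metrics are established by simulation.

```python
# Illustrative only: the 64 s timer limit is an assumption, not the paper's
# simulated figure. Computes the distance at which one-way light-time delay
# alone reaches a given TCP timer bound.

SPEED_OF_LIGHT_KM_S = 299_792.458

def protocol_radius_km(timer_limit_s):
    """Distance (km) at which one-way propagation delay equals the timer limit."""
    return SPEED_OF_LIGHT_KM_S * timer_limit_s

radius = protocol_radius_km(64.0)
print(f"{radius:,.0f} km")  # on the order of 19 million km
```

Under this assumption the radius falls far short of even the closest Earth-Mars distance, which is the intuition behind assessing TCP's suitability for deep-space links.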

A Reconfigurable Context-Aware Protocol Stack for Interplanetary Communication ... "This paper presents an approach to improve transmission success in delay-tolerant networks. The context-aware broker (CAB) grants networking autonomy when communicating in challenging environments, which suffer from conditions which are variable and exceed the limits for which terrestrial protocols were designed. Such environments currently require human intervention and the manual configuration of each communication - a seemingly simple decision of when to transmit becomes an issue in deep space due to planet movement. However, manual configuration is becoming unrealistic, given the scale on which communications occur. CAB automates the process by making intelligent decisions before transmission begins, and reconfigures as it progresses. It recognises the dynamic environments through which a transmission may pass and matches protocol capabilities with environmental constraints."

GPSDTN: Predictive-Velocity-Enabled Delay-Tolerant Networks for Arctic Research and Sustainability ... "A Delay-Tolerant Network (DTN) is a necessity for communication nodes that may need to wait for long periods to form networks. The IETF Delay Tolerant Network Research Group is developing protocols to enable such networks for a broad variety of Earth and interplanetary applications. The Arctic would benefit from a predictive velocity-enabled version of DTN that would facilitate communications between sparse, ephemeral, often mobile and extremely power-limited nodes. We propose to augment DTN with power-aware, buffer-aware location- and time-based predictive routing for ad-hoc meshes to create networks that are inherently location and time (velocity) aware at the network level to support climate research, emergency services and rural education in the Arctic. On Earth, the primary source of location and universal time information for networks is the Global Positioning System (GPS). We refer to this Arctic velocity-enabled Delay-Tolerant Network protocol as "GPSDTN" accordingly. This paper describes our requirements analysis and general implementation strategy for GPSDTN to support Arctic research and sustainability efforts."

Bringing IPTV to the Market through Differentiated Service Provisioning ... "The world of telecommunications continues to provide radical technologies. Offering the benefits of a superior television experience at reduced long-term costs, IPTV is the newest offering. Deployments, however, are slow to be rolled out; the hardware and software support necessary is not uniformly available. This paper examines the challenges in providing IPTV services and the limitations in developments to overcome these challenges. Subsequently, a proposal is made which attempts to help solve the challenge of fulfilling real-time multimedia transmissions through provisioning for differentiated services. Initial implementations in Opnet are documented, and the paper concludes with an outline of future work."

Improving the Performance of Asynchronous Communication in Long-Distance Delay-Sensitive Networks through the Development of Context-Aware Architectures ... "Context-awareness is inherent in anticipated interplanetary missions. Swarm technologies (D'Arrigo and Santandrea, 2005) use context-awareness in short-haul networks between components, while long-distance networks allow communication with Earth. However, the propagation delays limit real-time communications, deep space being an environment in which the speed of light becomes a restriction. Therefore, the development of a protocol stack which is adaptive to application requirements and external influences will help to maximise communication synchronicity. As part of a first-year doctoral research programme, this paper correlates current stack functionalities with interplanetary application requirements. A redesigned stack proposes to resolve this misalignment. Context-awareness is incorporated, enabling intelligent protocol selection using application layer knowledge and environmental information, with particular attention given to transport protocols. The paper concludes by considering transport protocol characteristics when deployed beside a context-aware layer, with the long-term aim being the development of a transport protocol suitable for deployment in the state-of-the-art context-aware stack."
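The idea of intelligent transport-protocol selection from application and environmental knowledge can be illustrated with a toy rule set. The thresholds and the candidate protocols here (TCP, UDP, and the Licklider Transmission Protocol used in delay-tolerant networking) are assumptions for illustration only, not the stack design proposed in the paper.

```python
def select_transport(reliable: bool, rtt_s: float, disrupted: bool) -> str:
    """Toy context-aware selection: match an application requirement
    (reliability) and environmental context (round-trip time in seconds,
    link disruption) to a transport protocol.

    In a deep-space regime, long delays or disrupted links rule out
    TCP's end-to-end handshaking, so a store-and-forward protocol
    such as LTP is preferred.
    """
    if disrupted or rtt_s > 60.0:
        return "LTP"
    if reliable:
        return "TCP"
    return "UDP"
```

A context-aware layer would feed such a function with measured link state and application-layer requirements, replacing a statically configured transport with one chosen per mission phase.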

Showcasing my Students' Work:

Immortal Bits: Managing Our Digital Legacies ... "An Ulster University student designed a website to manage and deliver digital assets after death.

There comes a time in our lives when we think about "getting our affairs in order" in anticipation of our inevitable demise. This might entail gathering important documents in a secure place and coordinating access for family or friends. But now that many of our assets are digital - photos, videos, documents, bank accounts - how do we arrange secure access for our heirs? While a master's degree student at Ulster University, Mark Hetherington designed the My Digital Legacy Web service to meet both client and beneficiary needs."

Published Paper Reviews

Discovery in the Internet of Things: the Internet of Things ... "The challenge of making sense of data collected in the Internet of Things (IoT), such that the “needle” can be found in the digital haystack, is the focus of this work. This is a significant area of research in the next phase of IoT development, to allow the IoT potential to be more fully achieved; the envisaged IoT is not currently being exploited due to limited hardware and software developments. This results in challenges related to the collection of data from smart cities and its organization, recognition, and use. Currently, these operations do not take place in a standardized way, resulting in ad hoc device- and application-specific deployments. Furthermore, as the IoT continues to evolve, the achievement of a cohesive, interoperable, and global system becomes increasingly unlikely."

Key Challenges for the Smart City: Turning Ambition into Reality ... "Even if the initiatives are sometimes uncoordinated, they bring the city each time a step closer to becom[ing] a true smart city.” While they do not define the context in which they consider a “true smart city” to exist, the authors capture the current roll-out of experimental technology, and the restricted existence of a “true smart city” where technologies are standardized, interoperable, and able to be easily integrated ..."

Growing Closer on Facebook: Changes in Tie Strength through Social Network Site Use ... "Relationships, measured in the strength of a tie between people, can be characterized according to interactions, and are dependent on what, why, and how we communicate, and the frequency of communication activities. Given the rise in new technologies, and dependencies on these to support day-to-day life, communications are changing in each aspect; we can therefore assume that our relationships are similarly changing ..."

A New Virtual Network Static Embedding Strategy within the Cloud's Private Backbone Network ... "Cloud computing is described, in this work, as overcoming the main issues of the computational world. It may be more accurately considered as compensating for resource availability, provisioning, and allocation decisions, while simultaneously introducing security and management cost impacts ..."

Hierarchical Virtual Machine Consolidation in a Cloud Computing System ... "As a timely contribution to energy efficiency challenges across data centers and clouds, the authors propose a virtual machine (VM) consolidation approach to limit the number of physical machines provisioned and active. By dealing with the problem of optimized component provision, ..."
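VM consolidation to limit the number of active physical machines is commonly framed as a bin-packing problem. Below is a minimal first-fit-decreasing sketch, assuming one-dimensional CPU demands; it illustrates the general technique only and does not reproduce the authors' hierarchical approach.

```python
def consolidate(vm_demands, host_capacity):
    """First-fit-decreasing bin packing: place each VM (largest CPU
    demand first) on the first host with room, opening a new host
    only when no existing host fits.

    Returns a list of hosts, each a list of the VM demands it hosts;
    the list length is the number of physical machines kept active.
    """
    hosts = []
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])  # no existing host fits: power one on
    return hosts
```

Sorting by decreasing demand before placement is the classic heuristic twist: large VMs anchor hosts early, and small VMs fill the leftover gaps, which tends to keep the active-host count close to optimal.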

Characterizing Hypervisor Vulnerabilities in Cloud Computing Servers ... "Security, in every application of the concept, is a constantly moving target: once defenders identify and patch a vulnerability, attackers move on to the next weak spot. Efforts are therefore required to track the path of exposures through ..."

A Study on Virtual Machine Deployment for Application Outsourcing in Mobile Cloud Computing ... "Cloud architectures support data center operations through more optimized responses to application requests. Performance is improved by placing virtual resources closer to application users, with capacity, which fulfils quality of service (QoS) ..."

A Survey of Context Data Distribution for Mobile Ubiquitous Systems ... "Effective context awareness is pertinent across networks today. The fact that a standardized solution has not yet been established is testament to the ongoing evolution and volatility of networks and the technologies involved ..."
page last updated: 23rd July, 2020