cathrynpeoples: Paper Abstracts

Peer-reviewed Chapters (please contact me for a copy):

The Design of Ethical Service Level Agreements to Protect Cyber Attackers and Attackees ... Cybercrime has a different ethos from other forms of crime. The challenge of anonymity, and the subsequent difficulty of attribution, makes cybercrime almost impossible to protect against. Furthermore, evidence indicates that a number of the major notable cyberattacks have been carried out by individuals who may be described as irrational – cybercriminals have been identified as having autism and Asperger’s syndrome. Routine activity theory (RAT) posits that crime is more prevalent when an attacker is in close proximity to a target and a reliable guardian is absent. However, in a number of cybercrime scenarios, guardians have been present, and the attackers were far away from the attack points, yet their crimes were able to occur. Online services, in general, are purchased through a service-level agreement (SLA). An SLA specifies the way in which a network service will be provided, with quality being measured from the perspective of the person paying for it. We can argue that RAT is supported through the provision of an SLA, in that the ability to accept a network service in a home is dependent on the presence of a guardian. If a member of a household has access to a network connection, it might therefore be assumed that a guardian will be present and able to act in the role of preventing crime. Furthermore, service providers can play some role in facilitating protection through educating parents in relevant protective mechanisms to apply through the use of parental controls. However, we recognise that, despite being offered these capabilities, not all customers necessarily understand the significance of the configuration options or, indeed, what the residents need in response to network activities ongoing in the background. We therefore argue that there is value to be gained from automating the service protection provided.
Automating the capabilities can provide additional protection through removing a layer which requires user interaction; in the context of the SLA, this involves knowing what the household needs to be protected against and asking the correct questions to ensure that protection is put in place. We consider this to support the provision of ‘ethical SLAs’, through knowing how to respond in a manner adequate for the needs of those within a household who can benefit. It is to this concept that the research proposal in this chapter responds.

Customisable Service Level Agreement (SLA) Generator Platform using FCAPS Management to Enhance Quality of Experience (QoE) on Internet of Things (IoT) ... Defining an ontology to support all possible scenarios in the Internet of Things (IoT) is challenging, given the range of IoT applications, the contexts in which they are used, and their continued evolution. Furthermore, a desire for flexibility and the ability to accommodate all scenarios using a standardised approach renders this a complex operational and management environment. An approach taken by some researchers in defining ontologies is to integrate components from different ontological schemes into a single schema. This approach is rationalised through the subsequent ability to support interoperability across IoT deployments. It also allows niche areas in individual ontologies to be merged to produce a more encompassing approach. However, this continues to be a pieced-together strategy, and depends on Service Provider intelligence to ensure all applications are fulfilled: SLAs provisioned on this basis therefore continue to require manual intervention. In response to this gap, we have proposed an ontology for the IoT. The ontology is unique in that it incorporates detail associated with customers and their preferences, such as their ability to tolerate the observed dataset becoming unavailable or the data collection frequency of a dataset changing. Furthermore, it supports the ability to accommodate domain-specific angles when working on a cross-domain approach; we believe that this is key in overcoming the limitations of other IoT ontologies and works towards a single standardised ontology. Through accommodating domain-specific elements, it is our objective that the Quality of Service (QoS) needs of applications are fulfilled without a customer needing significant technical knowledge.
Taking these unique aspects into account, we believe the ontology will facilitate automatic SLA provision, service setup, and service management throughout the SLA lifetime. This will help to support the business objectives of the Internet Service Provider. The ontology has been defined in our previous work. In this document, the use of these ontology terms to generate personalised SLAs is presented.

The New Normal: Cybersecurity and Associated Drivers for a Post-Covid-19 Cloud ... The cultural and technological impacts of the COVID-19 pandemic will be long-lasting, and one of the major impacts has been the increased uptake of cloud usage. Even if organizations have not been able to make the move to the cloud so far, it is part of the near-term business goals for many. In the social context, the cloud has a significant effect on human interactions; we are also saving more and more of our personal (and confidential) data in the cloud. During the first global lockdown in March 2020, the uptake of social media increased dramatically. It is predicted that the ‘new normal’ will involve continued use of social media sites such as Facebook and Instagram as well as conferencing tools such as Zoom and Microsoft Teams. In fact, the cloud is seen to be more crucial to everyday life than ever before. While good from a revenue-generating perspective for cloud operators and service providers, this is a challenging situation to manage. The post-COVID-19 cloud represents a nexus of critical drivers including cybersecurity, reliability, efficiency and cost that could transform the way the cloud and its associated technologies operate. In this chapter, we therefore examine the inter-relationships between these qualities, giving specific attention to the achievement of privacy and security. A proposal is made to extend the original CIA triad, which defines the priorities that should be given when integrating security into an organization, with a focus on the achievement of confidentiality, integrity and availability; with the achievement of these, we argue that there is an overall focus on reliability. The proposed extension, which forms eCIA, advocates that network reliability be achieved in parallel with efficiency.
When these objectives are considered together, in a planned approach rather than as a bolt-on reaction to a breach, there is a careful balance to strike in achieving them all in parallel. In this chapter, we consider potential ways in which the competing goals may be facilitated simultaneously.

Managing Cybersecurity Events using Service Level Agreements (SLAs) by Profiling the People who Attack ... Security frameworks are used to determine the approach to managing a network which may be under attack. The DREAD model from Microsoft, for example, promotes a strategy which is defined according to the impact of the attack on Damage, Reproducibility, Exploitability, Affected users, and Discoverability (DREAD). Each DREAD metric is scored, and the subsequent priorities are used to influence the reaction to the attack. In the event that an identified attack is being carried out by a security auditor, otherwise known as a white hat hacker whose intention is not malicious, the attack may not contribute significant Damage when considered according to DREAD, yet may be consuming resources and causing challenges for the network service provider in terms of their ability to fulfil all customer Service Level Agreements (SLAs). This is therefore an operational event which needs to be responded to when managing the network load, yet not necessarily from a cybersecurity perspective – it could, however, be managed from either perspective, performance or security. As an element of a Fault, Configuration, Accounting, Performance and Security (FCAPS) management approach, a response to such an event may involve reacting to a potential performance compromise occurring for security reasons. The network operator or service provider does not need to know the reason why the network is heavily loaded, and only needs to ensure sufficient resources are available to fulfil all SLAs. However, we recognise that there is an opportunity to pre-emptively identify that the network may become loaded in portions due to the tendencies of people operating within the network, specifically from a cybersecurity perspective and in relation to their intentions.
This is in recognition of the fact that people who attack networks have a propensity towards commonalities in their personal characteristics, and that these factors can be the drivers behind their attacking of a network. In addition to categorizing attackers according to their intention as malicious (black hat), grey hat, or friendly (white hat), we propose a further degree of categorisation in terms of those who: (1) have some personal pressure which is influencing their desire to carry out malevolent actions online, (2) are naturally highly intelligent and inquisitive, and (3) are mentally ill. In this chapter, an approach is proposed to manage the network by profiling the characteristics of users residing across it according to their propensity to carry out a cyber-attack. Furthermore, we suggest using this information to pre-empt their activity such that the SLAs for all customers will continue to be achieved throughout the SLA lifetime. This process will be facilitated through the way in which the SLAs are defined and the information collected during the service setup procedure.
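The DREAD scoring step this abstract refers to can be sketched as follows. This is a minimal illustration only: it assumes 0-10 scores per metric and a simple mean as the overall rating, and the example scores are hypothetical, not taken from the chapter.

```python
# Minimal sketch of DREAD-style scoring, assuming 0-10 scores per metric
# and a simple mean as the overall rating (real deployments may weight
# metrics differently).

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Return the mean DREAD rating for an attack; each metric scored 0-10."""
    metrics = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    return sum(metrics) / len(metrics)

# A white-hat audit: low Damage, yet still consuming network resources -
# the case the chapter argues may be missed by a Damage-led response.
audit = dread_score(damage=1, reproducibility=8, exploitability=7,
                    affected_users=2, discoverability=9)

# A malicious attack scoring highly across the board.
malicious = dread_score(damage=9, reproducibility=8, exploitability=7,
                        affected_users=9, discoverability=9)

assert audit < malicious  # the audit ranks lower despite its resource cost
```

The point of the sketch is that a white-hat audit can rank low overall while still loading the network, which is why the chapter argues such events may need handling from a performance perspective rather than a security one.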

A Multi-level Ontology to Manage Service Level Agreements in Smart Cities ... Internet of Things (IoT) services, to date, have been managed through affordances made by service providers (SP) to data provider (DP) customers who supply data for hosting in a shared repository. Services provided to data consumers (DC), on the other hand, are not managed in a similar way, with DCs being able to access datasets without providing detail to track them. Typically, DCs are not paying customers, and subsequently receive a best-effort Quality of Service (QoS) – thus they are vulnerable in the current system to change in data availability. To promote continued growth of the IoT, it is anticipated that changes are required to the business model. This may result in greater levels of protection for DC customers and more guaranteeable levels of service. In this chapter we present an ontology which responds to the challenge of managing customer information and providing a service autonomously in response. An application of the ontology is contextualized using the smart city waste management domain.

Priority Activities for Smart Cities and the Infrastructure ...

A Standardizable Network Architecture Supporting Interoperability in the Smart City Internet of Things ...

Green Networks and Communications ...

Peer-reviewed Papers:

Pirbhai, N. F. & Peoples, C. (2022) "Recommendations of the Ethical Issues to Accommodate when Digitalizing the Big Data in the Field of Arts & Humanities", Proceedings of the IEEE International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), pp. 1-6. ... The fields of information science and digital humanities are closely linked, with the overlapping of common research themes noted in many research papers. National libraries and archives have been mass collecting and digitizing cultural and scholarly productions for a long time. However, with the data accessibility offered by the web and search engines, questions arise surrounding ethical issues and transparency behind the actual practices of big data/mass digitization, such as collecting and using data. It is important to find out whether, under the General Data Protection Regulation (GDPR) as one example, human rights are being respected. Since digital libraries, as a field of research, are focusing on certain fundamental concerns in the growing cyberculture, this qualitative research, through a literature review, analyzes the ethical implications of big data and digitalization. In fact, even though mass digitalization is beneficial, there is a need for more control over how data is stored and disseminated. This study therefore focuses on the insights and suggestions of existing authors to understand the different loopholes that must be considered when embarking on a digitalization project. Google LLC is used as a case study to contextualize the ethical issues. The study revealed that the process of digitalization is complex, as there are many unknowns around ethical standards. Furthermore, security issues are not often a priority during the digitalization process. Recommendations of further procedures around the digitalization process will therefore create a more human-centered, ethical, computing environment.

C. Peoples, A. Moore, N. Georgalas (2022) "Customer Classification Recommender to Support Personalised Service Level Agreements Across the Internet of Things", IEEE WF-IoT, November 2022. ...

C. Peoples, A. Moore, N. Georgalas (2022) "Service Level Agreement (SLA) Chains Supported by Cloud in a Complex Port Ecosystem with Competing Stakeholder Goals", EAI Endorsed Transactions on Cloud Systems, DOI: 10.4108/eai.26-7-2022.174394. ... Ports play a crucial role in the global economy and in facilitating international trade. However, a port exists within a complex ecosystem and there are challenges in managing operations here for any single objective, with primary goals including operational performance, cost, sustainability, safety, and even satisfaction. Each goal can be aligned with a stakeholder group, and all need to be managed in parallel for an overall effective, satisfying, and efficient port. However, opportunities in the chain of activities at ports which lead to these goals being achieved are not being protected or exploited, and until schemes are put in place to do so, a challenging working and living environment is allowed to persist in and around the port. This is having a damaging impact on both employee motivation and local resident satisfaction [1] [2]. Furthermore, due to the dependencies between each stakeholder, a negative experience for one can lead to a consequential negative reaction for others, and the effectiveness and efficiency of the entire ecosystem begins to decline. From a port management perspective, there is therefore a need to manage port activities without unnecessary delay due to the ripple effect and subsequent reactions on all stakeholders in the ecosystem. The aim of this article is to consider the complexity of the port environment from the perspective of stakeholders, with a view to recognising the ways that their needs can be targeted in parallel using cloud-driven Service Level Agreements (SLAs).

C. Peoples, A. Moore, & N. Georgalas, "Port Sustainability as a Service", Frontiers Sustainability SI on Meeting of Giants: How can Technology and Consumption Team up for Sustainability, June 2022. ... The maritime industry is a complex ecosystem which is important to manage carefully given its handling of global trade. Effective operation at a port is dependent on a timely passage of goods, involving multiple competing objectives, one of which is sustainability. Unsurprisingly, given the extent of a port’s operations, it is a significant contributor of emissions. A port is a physically demanding industry in which to work, and any degradation in workforce productivity can have a detrimental effect on the port’s effective running. Slow operations, combined with dependencies between a port’s stakeholders, can further amplify unsustainability. There are some efforts to explore the digitalization of ports, including the creation of smart ports. However, there is widespread resistance to the introduction of technology in this domain. There are therefore a number of areas in which to make technical contributions to improve the efficiency of port operations. In this paper, we propose using the satisfaction of staff at a port to assess the efficiency of its operations. This will be possible through the roll-out of sensors supporting an Internet of Things (IoT) architecture across the port, with the intention of improving operational efficiencies. With operational latencies as expected, we argue that staff will be satisfied; however, once delays become more unpredictable and unexpected, staff satisfaction will decline. It is therefore through increased IoT use that port sustainability will be supported. To enable this, staff satisfaction can be monitored and managed using Service Level Agreements (SLAs). When staff are satisfied, the port can be operated at, and sustain, low costs.
When staff satisfaction begins to decline, however, operation will become more focused on the performance of the port, with a view to improving it through identifying where bottlenecks exist from the perspective of inefficient operations and, subsequently, staff output. By simultaneously managing both cost and performance through the satisfaction of staff, the goal is an overall positive contribution to a port’s efficiency and sustainability.

C. Peoples, Z. Tariq, A. Moore, M. Zoualfaghari & A. Reeves, "Using Process Mining to Formalise Service Level Agreement (SLA) Allocation," 2021 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/IOP/SCI), 2021, pp. 671-676, doi: 10.1109/SWC50871.2021.00100. ... Service Level Agreement (SLA) assignment for online services is typically an unstandardized process, and is generally executed in an ad hoc manner to respond to customer service requirements. The steps taken by organizations to assign the SLA to a customer can also be influenced by the technical knowledge of a customer and their ability to explain the service requirement. There is also a need to analyze the organization’s SLA assignment for any underlying discrepancies in the execution of such a complex process. In this paper, we analyze our earlier proposed SLA assignment process using process mining techniques. We validate the suitability of our proposed approach for wider scenarios of customer requirements, including customers with no technical knowledge about their required services. We also present both the customer’s and the system’s perspectives of the SLA assignment process using process discovery techniques. Our proposed SLA assignment process verifies the relevance of SLA assignment activities such as optimized customer interaction, score assignment, and subsequent customer service allocation.

C. Peoples, P. Kulkarni, K. Rabbani, A. Moore, M. Zoualfaghari, I. Ullah, "A Smart City Economy Supported by Service Level Agreements: A Conceptual Study into the Waste Management Domain," MDPI Smart Cities, June 2021, DOI: 10.3390/smartcities4030049. ... The full potential of smart cities is not yet realized, and opportunities continue to exist in relation to the business models which govern service provision in cities. In saying this, we make reference to the waste services made available by councils across cities in the United Kingdom (UK). In the UK, smart waste management (SWM) continues to exist as a service trialed across designated cities, and schemes are not yet universally deployed. This therefore exists as a business model which might be improved so that wider roll-out and uptake may be encouraged. In this paper, we present a proposal of how to revise SWM services through integrating the Internet service provider (ISP) into the relationship alongside home and business customers and the city council. The goal of this model is to give customers the opportunity for a more dynamic and flexible service. Furthermore, it will introduce benefits for all parties, in the sense of more satisfied home and business owners, ISPs with a larger customer base and greater profits, and city councils with optimized expenses. We propose that this is achieved using personalized and flexible SLAs. A proof of concept model is presented in this paper, through which we demonstrate that the cost to customers can be optimized when they interact with the SWM scheme in the recommended ways.

A. McCurdy, C. Peoples, A. Moore, M. Zoualfaghari, "Waste Management in Smart Cities: A Survey on Public Perception and the Implications for Service Level Agreements," EAI Endorsed Transactions on Smart Cities, May 2021, DOI: 10.4108/eai.27-5-2021.170007. ... INTRODUCTION: Waste management in cities has not advanced at the same rate as technology in general. Furthermore, there is little evidence that citizens are satisfied with services in smart cities. OBJECTIVES: The objective of this paper is therefore to capture citizen perspectives in relation to smart city services and, specifically, that of waste management. METHODS: An online survey was disseminated using Google Forms to twenty-five homeowners within the Tourism Ireland office in Coleraine, Northern Ireland. The objective was to gather the typical citizen perspective of smart cities, their views on the meaning of ‘smart waste management’, and any features which they would like to experience with regard to their waste collection process and/or schedule in a future smart city. RESULTS: It was found that a common perception of a smart city exists, it being one concerned with efficiency and recycling; fewer citizens are, however, familiar with the term ‘smart waste management’. Homeowners generally acknowledge that improvements to their current bin collection schedule are necessary. CONCLUSION: The paper concludes with a discussion of the ways in which citizens believe that a bin collection schedule which they are in control of would be an improvement on a council-defined one. We correlate this with extensions necessary to service provisioning processes, and Service Level Agreements (SLAs), to support future smart city services.

Creative Assessment Design on a Master of Science Degree in Professional Software Development ... An MSc conversion degree is one which retrains students in a new subject area. This type of programme opens new opportunities to students beyond those gained through their originally chosen degree. Students entering a conversion degree do so, in a number of cases, to improve career options, which might mean moving from an initially chosen path to gain skills in a field that they now consider to be more attractive. With a core goal of improving future employability prospects, specific requirements are therefore placed on the learning outcomes achieved from the course content and delivery. In this paper, the learning outcomes are focused on the transferable skills intended to be gained as a result of the assessment design, disseminated to a cohort of students on a Master of Science (MSc) degree in Professional Software Development at Ulster University, United Kingdom. Coursework submissions are explored to demonstrate how module learning has been applied creatively.

Research-based Education on a Master of Science Degree in Professional Software Development ... This article contributes to the narrowly-investigated field of research-based assessment. Research-based assessment supports student learning by offering choice in how it takes place. It is not widely offered, however, for reasons which can include the challenge of marking outputs consistently, and the importance of ensuring that students engage early with the task. The approach presented in this article exploits this technique, and additionally merges it with authentic assessment, where students are involved in the assessment design. The study confirms the effectiveness of the approach through the mark profile, despite not all students engaging with it at the earliest opportunity. The study also identifies how students became more competent with research-based assessment when reflecting on feedback for a similar piece previously assessed.

A Review of IoT Service Provision to Assess the Potential for System Interoperability in an Uncertain Ecosystem ... A sprawling and uncoordinated Internet of Things (IoT) environment has evolved in an uncontrolled manner where applications and infrastructure are made available as and when desired by operators, and, in many instances, without concern for others co-existing here. There is evidence already of IoT Service Providers changing their operational priorities and, as a result, the IoT may have an uncertain long-term future. This paper is driven by an assumption that an IoT environment with longevity will not be achieved using today's siloed systems; it is believed that these will not naturally mesh together without strategies in place to facilitate this. We therefore examine the reasons why and extent to which IoT technologies are generally not interoperable through the distinct ways that services are made available.

Building Stakeholder Trust in Internet of Things (IoT) Data Services using Information Service Level Agreements (SLAs) ... A Service Level Agreement (SLA) defines a contract between network service providers and consumers, specifying the terms of a service which providers will make available and the conditions which consumers will accept. To date, SLAs have been specified using basic terms, such as availability and network performance, with a consumer being compensated in the event that the service provided does not meet the terms agreed. Given changes in the ways which network services are now made available, however, SLA terms are changing to capture both the differences in service provision and, additionally, in the responsibilities of the parties involved. It is this aspect of information SLAs which we respond to in this work, and we propose a SLA model which accommodates the requirements of these new relationships. We also propose a set of metrics, a selection of which are presented in this paper to demonstrate our concept, and recommend that a selection can be adapted by consumers. Finally, due to the intricate relationships between data consumers and data providers in the IoT environment and the fact that metric adaptation may lead to SLA violation, we discuss SLA conflict resolution through prioritizing non-functional metrics on a per-customer basis.
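The per-customer conflict resolution this abstract proposes can be illustrated with a small sketch. The metric names, priority values, and conflict relation below are hypothetical placeholders, not the model or metrics defined in the paper:

```python
# Hypothetical sketch: resolve conflicting SLA metric adaptations by
# applying them in descending per-customer priority order, dropping any
# lower-priority change that conflicts with one already applied.

def resolve_conflict(requested_changes, priorities):
    """requested_changes: {metric: {"conflicts_with": [other metrics]}}
    priorities: {metric: numeric priority for this customer}
    Returns (applied, dropped) lists of metric names."""
    applied, dropped = [], []
    taken = set()
    for metric in sorted(requested_changes,
                         key=lambda m: priorities.get(m, 0), reverse=True):
        conflicts = requested_changes[metric].get("conflicts_with", [])
        if any(c in taken for c in conflicts):
            dropped.append(metric)  # loses to a higher-priority change
        else:
            applied.append(metric)
            taken.add(metric)
    return applied, dropped

# A customer who values availability over data-collection frequency:
changes = {
    "availability": {"conflicts_with": []},
    "collection_frequency": {"conflicts_with": ["availability"]},
}
applied, dropped = resolve_conflict(
    changes, {"availability": 2, "collection_frequency": 1})
```

Because priorities are supplied per customer, the same pair of requested adaptations can resolve differently for different consumers, which is the essence of the per-customer prioritization the abstract describes.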

The Development of an iOS Application for Gym Membership Management with Firebase Integration and Gamification Support ... It has become apparent that a number of health and fitness centers continue to use outdated, inefficient, and ineffective methods when it comes to the management of their members. The technologies and methods at their disposal are neither appropriate nor current, and they often use several distinct IT systems to manage gym memberships. With an influx of boutique gyms opening, offering individualized membership and fitness services at a fraction of the cost of regular gyms, it is paramount that bigger gyms overcome their software limitations to retain current members.

A Survey of the Ability of the Linux Operating System to Support Online Game Execution ... "Linux has suffered sluggish home user uptake due mainly to the dominance of rivals, and has seen numerous incarnations as a gaming platform fall flat. Gaming is a particularly sensitive application given its intensive bandwidth and system response requirements; these applications therefore place specific demands on the Operating System platform on which game play is supported. In this work, the ability of the Linux operating system to support execution of online games is explored through a survey of the state-of-the-art in this area. Given the recent increase in cloud-based online gaming, it can be concluded that the time is ripe for more widespread Linux uptake, especially in the gaming domain. This is particularly true today given the amount of exposure to Information Technology across society in general, and ongoing deployment of Internet of Things environments: Linux's open source, modular and freely customisable design may therefore not be as daunting as before, and the unique benefits of this platform may be exploited for the experiences it can bring to applications in general and, specific to the context of this work, players in their game play. This paper makes a unique contribution to the field: Although a number of articles are available within the general area of Linux and gameplay, a thorough survey on this issue has not been seen so far. This is therefore the gap to which this paper contributes."

The Design of a Gamification Algorithm in a Music Practice Application ... "Keeping track of pupils' progress across different instruments and lessons, and what they are meant to be practicing, can be challenging. The typical solution is to use a book in which teachers write notes and pupils record practice. This can, however, easily be lost or become illegible. Furthermore, music education and self-directed practice is one area of education which is not widely gamified, with gamification describing a technique that drives specific human behaviors, motivates users, and has proven success in influencing learning. An application could therefore be created to respond to these needs by recording and tracking music practice whilst also gamifying student learning. An algorithm which accommodates these requirements is presented in this paper."
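A gamified practice-tracking loop of the kind this abstract describes might look like the sketch below. The point values, streak bonus, and function name are hypothetical illustrations, not the algorithm presented in the paper:

```python
# Hypothetical sketch of awarding points for a recorded practice session:
# time practised, a bonus for consecutive practice days (to motivate
# regular self-directed practice), and a bonus per piece completed.

def practice_points(minutes, streak_days, pieces_completed):
    """Return the points earned for one logged practice session."""
    base = minutes                      # one point per minute practised
    streak_bonus = 5 * streak_days      # reward an unbroken daily streak
    completion_bonus = 20 * pieces_completed
    return base + streak_bonus + completion_bonus

# A 30-minute session on the 4th day of a streak, finishing one piece:
assert practice_points(minutes=30, streak_days=4, pieces_completed=1) == 70
```

A scheme like this replaces the teacher's written practice book with a tamper-resistant log, while the streak and completion bonuses supply the behavioural motivation that the abstract attributes to gamification.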

A Standardizable Network Architecture Supporting Interoperability in the Smart City Internet of Things ... "An increase of 2.5 billion people is expected in urban areas by 2050, when 66% of the world population will reside here. It is therefore reasonable to assume a parallel growth in the smart city Internet of Things (IoT). A challenge, however, is presented in the interoperability between the devices deployed, limited due to the ad hoc and proprietary ways which systems have been rolled out to date. A standardized network infrastructure specific to the IoT can work towards resolving the challenges. This approach to operation, however, raises questions with regard to how an architecture may support different devices and applications simultaneously, and additionally be extensible to accommodate applications and devices not available at the time of the framework’s development. In this paper, these questions are explored, and an IoT infrastructure which accommodates the interoperability communication constraints and challenges today is proposed."

A Web-based Portal for Assessing Citizen Wellbeing ... "Although advancements have been made in sensor technologies, current smart city systems make little attempt to collect subjective data from humans. This can be overcome by exploiting the concept of humans as sensors, letting people become integral to the decision-making chain by providing their thoughts, feelings, and general feedback on how they interact with their city and, more importantly, how city services affect their lives. A significant challenge of operating applications in smart cities, however, is achieving interoperability and easily accessible application support without modifying the hardware and software already available, which slows the development and deployment process. In designing a solution to support the objectives described, the authors thus harnessed open source technologies, with the expectation that these are readily usable with systems in existence and are therefore easy to integrate into today's smart city technology fabric."

Using IT to Monitor Well-being and City Experiences ... "Gibson first proposed the term smart city in 1992 to broadly define how urban development was turning toward technology at that time and the subsequent innovation and globalization possible as a result. Since then, the term smart city has evolved, and Zanella (2014) uses it to describe how a city will be "equipped with microcontrollers, transceivers for digital communication, and suitable protocol stacks that will make them able to communicate with one another and with the users." These sensors are used around cities to record a wide range of data, including information on utilities (e.g., smart grids, street lighting, water and electricity consumption), the environment (e.g., temperature, humidity, pollution), and traffic management (e.g., parking space usage, traffic monitoring and modeling). This data is analyzed by city authorities, utility companies, and businesses to enable maintenance and investment in infrastructure and services throughout the city."

The Cloud Afterlife: Managing your Digital Legacy ... "As technology has evolved, so too has the way in which we store information: Simple items like photographs which, in the past, we could have flicked through in a printed album, are now often only stored online. If they are not accessible online, they will therefore not be accessible at all once we are no longer around to locate them. This may have a psychological impact on the people we leave behind. In addition to the ethical concern, management of assets in the cloud is also a resource management challenge from the sustainability and environmental perspectives: As redundant data increasingly consumes resources, network sustainability becomes compromised. There is therefore an opportunity to optimize the process for the ethical, environmental, and sustainability implications of doing so. To determine the extent to which a problem exists both now and potentially in the future, we have conducted a survey to capture perceptions on cloud footprints in general, and the importance which people place on recovering digital assets from the cloud prior to death. Our results confirm that online users are generally unaware that this is an aspect which they should be considering in their estate planning - only 29% of respondents have considered what will happen to their online data after death - but the majority agree that it is important and indicate that they will give it greater attention in the future."

Profiling User Behaviour for Efficient and Resilient Cloud Management ... "User behaviour profiling can be used within network management schemes to indicate the capabilities required from a cloud management proxy in terms of the way and rate at which it should be aware of the real-time network state and the resources which require provision. A management proxy needs, for example, the ability to monitor change in the most popular webpages associated with a website or files associated with an application so that this detail may influence the caching strategy for optimised performance and operation. When resources are provisioned dynamically across a cloud, this will accommodate efficiency and security objectives, and also take into account the ways in which users are demanding services to maximise the likelihood that requirements are met. Many online companies now operate in this way and analyse customer behaviour to improve services by meeting predictable requirements and manipulating unpredictable behaviour. It is therefore to this gap that we respond. This model is developed around behaviour profiles of user access and activities associated with the Wireless Sensor Knowledge Archive (Wisekar) website hosted in the Indian Institute of Technology in Delhi, India. Trends in user access and activities with the website are identified. In response, a cloud management framework is proposed. A user satisfaction metric based on visit duration and return visits controls cloud operation to improve the user experience and optimise management efficiency. A Certainty Factor quantifies the confidence with which management is applied such that actions enforced accommodate both predictable and unpredictable user behaviour."
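A satisfaction metric of the kind the abstract describes, built from visit duration and return visits, might be sketched as follows. This is an illustrative construction only, not the paper's actual metric; the function name, weights, and normalisation caps (`max_seconds`, `max_returns`) are assumptions.

```python
def satisfaction(visit_seconds, return_visits, max_seconds=600.0, max_returns=10):
    """Hypothetical satisfaction score in [0, 1]: equal weighting of
    normalised visit duration and normalised return-visit count."""
    dur = min(visit_seconds / max_seconds, 1.0)   # cap at the assumed "long visit"
    ret = min(return_visits / max_returns, 1.0)   # cap at the assumed "loyal user"
    return 0.5 * dur + 0.5 * ret
```

A management framework could then raise caching or provisioning priority for content whose visitors score highly, e.g. `satisfaction(300, 5)` yields 0.5 under these assumed caps.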

Application Resource and Activity Footprinting to Influence Management in Next Generation Clouds ... "Standardised management solutions are an objective for the next generation of cloud to autonomically provision and configure resources in a manner generically applicable across platforms and applications. Cloud interoperability, a consequence of standardised operation, is desired in spite of the fact that platforms have variable management requirements and applications have various resource demands. In this paper, the footprint of an application developed at the Indian Institute of Technology in Delhi is explored, alongside consideration of its scalability as increasing volumes of requests are supported. The limiting network resource in this deployment is bandwidth availability at the server, which restricts the extent to which memory and CPU resources can be consumed, regardless of the number of application requests sent to the server. Definition of resource consumption relationships between attributes while servicing application requests leads to recommendations on server and network loading in a manner which optimises the overall balance of resource utilisation across all. This results in an average footprint across CPU, memory and network of 0.85 (max. of 1)."

The Standardisation of Cloud Computing: Trends in the State-of-the-Art and Management Issues for the Next Generation of Cloud ... "Roll-out of future cloud systems will be influenced by regulations from the standardisation bodies, if made available across the community. Trends in cloud deployment, operation and management to date have not been guided by any regulatory standards, and resources have been deployed in an ad hoc manner as demanded according to the business objectives of service providers. This is the least costly and most quickly revenue-returning business model. It is not, however, the most cost-effective approach on a long-term basis: As a consequence of this roll-out model to date, the interoperability of resources deployed across clouds managed by different operators is restricted through inability to allocate workload to them in a regulated and controllable manner. The absence of standardised approaches to cloud management is therefore beginning to be accommodated such that the cost and performance advantages of interoperable operation may be exploited. In this paper, we review the state-of-the-art in standards across the field and trends in their development. We present a model which defines the drivers for cloud interoperability and constraints which restrict the extent to which this may realistically occur in future scalable solutions. This is supplemented with discussion on future challenges foreseen with regard to cloud operation and the way in which standards require provision such that cloud interoperation may be accommodated."

Energy Aware Scheduling across 'Green' Cloud Data Centres ... "Data centre energy costs are reduced when virtualisation is used as opposed to physical resource deployment to a volume sufficient to accommodate all application requests. Nonetheless, regardless of the resource provisioning approach, opportunities remain in the way in which resources are made available and workload is scheduled. Cost incurred at a server is a function of its hardware characteristics. The objective of our approach is therefore to pack workload into servers, selected as a function of their cost to operate, to achieve (or come as close as possible to) the maximum recommended utilisation in a cost-efficient manner, avoiding instances where devices are under-utilised and management cost is incurred inefficiently. This is based on queuing theory principles and the relationship between packet arrival rate, service rate and response time, and recognises a similar exponential relationship between power cost and server utilisation to drive its intelligent selection for improved efficiency. There is a subsequent opportunity to power redundant devices off to exploit power savings through avoiding their management."
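The queuing-theory relationship the abstract leans on can be illustrated with a minimal sketch. This is not the paper's algorithm: the servers (given as `(service_rate, idle_W, peak_W)` tuples), the quadratic power model standing in for the superlinear power-utilisation relationship, and the response-time cap are all assumptions for illustration.

```python
def mm1_response_time(lam, mu):
    """Mean response time of an M/M/1 queue; infinite if the queue is unstable."""
    if lam >= mu:
        return float("inf")
    return 1.0 / (mu - lam)

def power_watts(lam, mu, idle, peak):
    # Assumed model: power rises superlinearly with utilisation rho = lam / mu,
    # echoing the exponential-like relationship the abstract notes.
    rho = min(lam / mu, 1.0)
    return idle + (peak - idle) * rho ** 2

def pick_server(lam, servers, t_max):
    """Choose the cheapest-to-run server (mu, idle_W, peak_W) whose mean
    response time stays within t_max; None if no server qualifies."""
    feasible = [s for s in servers if mm1_response_time(lam, s[0]) <= t_max]
    if not feasible:
        return None
    return min(feasible, key=lambda s: power_watts(lam, s[0], s[1], s[2]))
```

With an arrival rate of 50 req/s, a 0.1 s cap, and candidates (55, 60, 120), (80, 90, 180) and (200, 150, 300), the 80 req/s server is chosen: the slowest misses the response-time cap, while the fastest burns more idle power than the workload warrants.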

Cloud Services in Mobile Environments - The IU-ATC UK-India Mobile Cloud Proxy Function ... "Mobile networks currently play a key role in the evolution of the Internet due to exponential increase in demand for Internet-enabled mobile devices and applications. This has led to various demands to re-think basic designs of the current Internet architecture, investigating new and innovative ways in which key functionalities such as end-to-end connectivity, mobility, security, cloud services and future requirements can be added to its foundational core design. In this paper, we investigate, propose and design a functional element, known as the mobile cloud proxy, that enables the seamless integration and extension of core cloud services on the public Internet into mobile networks. The mobile cloud proxy function addresses current limitations in the deployment of cloud services in mobile networks tackling limitations such as dynamic resource allocation, transport protocols, application caching and security. This is achieved by leveraging advances in software-defined radios (SDRs) and networks (SDNs) to dynamically interface key functions within the mobile and Internet domains. We also present some early benchmarking results that feed into the development of the mobile cloud proxy to enable efficient use of resources for cloud based services such as social TV and crop imaging in mobile environments. The benchmarking experiments were carried out within the IU-ATC India-UK research project over a live international testbed which spans across a number of universities in the UK and India."

Performance Evaluation of Green Data Centre Management supporting Sustainable Growth of the Internet of Things ... "Network management is increasingly being customised for green objectives due to the roll-out of mission-critical applications across the Internet of Things and execution, in a number of cases, on battery-constrained devices. In addition, the volume of operations across the Internet of Things is attracting climate change concerns. While operational efficiency of wireless devices and in data centres (which support operation of the Internet of Things) should not be achieved at the expense of Quality of Service, optimisation opportunities should be exploited and inefficient resource use minimised. Green networking approaches, however, are not yet standardised, and there is scope for novel middleware architectures. In this paper, we explore operational efficiency from the perspective of activities in data centres which support the Internet of Things. This includes evaluation of the effectiveness of mechanisms integrated into the e-CAB framework, an algorithm proposed by the authors to manage next generation data centres with green objectives. A selection of its policy mechanisms have been implemented in the NS-2 Network Simulator to evaluate performance; configuration decisions are described in this paper and presented alongside experimental results which demonstrate optimisations achieved. Focus lies, in particular, on rate adaptation of its context discovery protocol which is responsible for capturing real-time network state. Performance results reveal a small overhead when applying network management and validate improved efficiency through adaptation in response to environment dynamics."

An Energy Aware Network Management Approach using Server Profiling in 'Green' Clouds ... "Clouds and data centres are significant consumers of power. There are, however, opportunities for optimising carbon cost here as resource redundancy is provisioned extensively. Data centre resources, and subsequently clouds which support them, are traditionally organised into tiers; switch-off activity when managing redundant resources therefore occurs in an approach which exploits cost advantages associated with closing down entire network portions. We suggest, however, an alternative approach to optimise cloud operation while maintaining application QoS: Simulation experiments identify that network operation can be optimised by selecting servers which process traffic at a rate that more closely matches the packet arrival rate, and resources which provision excessive capacity additional to that required may be powered off for improved efficiency. This recognises that there is a point in server speed at which performance is optimised, and operation which is greater than or less than this rate will not achieve optimisation. A series of policies have been defined in this work for integration into cloud management procedures; performance results from their implementation and evaluation in simulation show improved efficiency by selecting servers based on these relationships."

Context-Aware Characterisation of Energy Consumption in Data Centres ... "Carbon emissions are receiving increased attention and scrutiny in all walks of life and the ICT sector is no exception. With the increase in on-demand applications and services together with on-demand compute/storage facilities in server farms or data centres there are self-evident increases in the power requirements to maintain such systems. Commentators on the impact of increased carbon emissions from powering electrical systems, however, regularly stress negative side-effects such as their influence on climate change. Action is subsequently being encouraged to halt further environmental damage. The problem is explored in this paper from the point of view of carbon emissions from data centre operations and the development of energy-aware management and energy-efficient networking solutions. Data centre energy consumption costs drive the evaluation process within a Data Centre Energy-Efficient Context-Aware Broker (DCe-CAB) algorithm designed as an original solution to this significant carbon-contributing network scenario. In this paper, performance requirements and objectives of the DCe-CAB are defined, along with case study demonstration of the way in which it optimises selection and operation of data centres using context-awareness."

Towards the Simulation of Energy-Efficient Resilience Management ... "Energy-awareness and resilience are becoming increasingly important in network research. So far, they have been mainly considered independently from each other, but it has become clear that there are important interdependencies. Resilience should be achieved in a manner which is energy-efficient, and energy-efficiency objectives should respect the networks' need to be prepared to observe and react against disruptive activity. Meeting these complementary and sometimes conflicting research objectives demands novel strategies to support energy-efficient resilience management. However, the effective evaluation of cross-cutting energy and resilience management aspects is difficult to achieve using the tool support currently available. In this paper, we explore a range of network simulation environments and assess their ability to meet our energy and resilience modelling objectives as a function of their technical capabilities. Furthermore, ways in which these tools can be extended based on previous related implementations are also considered."

Autonomic Context-Aware Management in Interplanetary Communications Systems ... "Maintaining connectivity in deep-space communications is of critical importance to key missions and the ability to adapt node behavior “on-the-fly” can have dynamic benefits. Autonomic operation minimizes failure risk by performing local configurations using collected context data and on-board policies, improving response time to events, and reducing remote mission management expense. Herein, we evaluate cost-benefit impacts when a context-aware brokering algorithm developed to achieve autonomy is applied to interplanetary communications systems."

Energy-Aware Data Centre Management ... "Cloud computing is one way in which communications within and between data centres can be optimised by using resources which are physically close to the client, are exposed to lower electricity costs, contribute a smaller carbon footprint or have residual resources sufficient to fulfil Quality of Service requirements. Optimisation of activity involving data centres is a next generation network management objective due to continued growth in the number of plants and volume of operations within, factors which contribute to environmental concerns associated with energy consumption and carbon emissions from data centre facilities when renewable energy resources are not used. In this paper, we present an algorithmic mechanism developed to automate selection of a data centre in response to application requests, the Data Centre Energy-Efficient Context-Aware Broker (DCe-CAB). Through integration of the DCe-CAB in a case study scenario, operational improvement through reduction of carbon emission and balancing of other performance-related attributes including delay and financial cost is achieved, validating the DCe-CAB's positive impact."
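The balancing of carbon, delay and financial cost that the abstract describes can be pictured as a weighted scoring step. This is a sketch of the general technique, not the DCe-CAB's actual policy logic; the attribute names, weights, and the assumption that attributes are pre-normalised to [0, 1] are all illustrative.

```python
def score(dc, w_carbon=0.5, w_delay=0.3, w_cost=0.2):
    """Weighted penalty for one candidate data centre (lower is better).
    Attributes are assumed pre-normalised to [0, 1]; weights are illustrative."""
    return w_carbon * dc["carbon"] + w_delay * dc["delay"] + w_cost * dc["cost"]

def select_data_centre(candidates, **weights):
    # Automated selection: route the application request to the
    # lowest-penalty candidate.
    return min(candidates, key=lambda dc: score(dc, **weights))
```

Tilting the weights towards carbon, as above, lets a greener but slightly slower or dearer centre win the selection, which is the trade-off the broker is designed to manage.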

Operational Performance of the Context-Aware Broker (CAB): A Communication and Management System for Delay-Tolerant Networks (DTNs) ... "The Context-Aware Broker is a policy-based management system developed by the authors to achieve autonomic communication in delay-tolerant networks. This is in recognition of environment challenges when operating in remote regions, and time, human, and financial resource costs incurred during mission-specific configuration. The Context-Aware Broker seeks to limit cost overheads through achieving a standardised transmission approach, and operating autonomically to optimise reliability and sustainability levels achieved. In achieving its network management function, a cost-benefit impact is the consequence. Performance results from the Context-Aware Broker's deployment in ns-2.30 are presented and evaluated in this paper."

A Context-Aware Policy-Based Framework for Self-Management in Delay-Tolerant Networks (A Case Study for Deep Space Exploration) ... "Policy-based management allows the deployment of networks offering quality services in environments beyond the reach of real-time human control. A policy-based protocol stack middleware, the context-aware broker, has been developed by the authors to autonomically manage the remote deep space network. In this article example policy rules demonstrate the concept, and prototype results from ns-2.30 show the overall positive cost-benefit impact in an example scenario."

TCP's Protocol Radius: the Distance where Timers Prevent Communication ... "We examine how the design of the Transmission Control Protocol (TCP) implicitly presumes a limited range of path delays and distances between communicating endpoints. We show that TCP is less suited to larger delays due to the interaction of various timers present in TCP implementations that limit performance and, eventually, the ability to communicate at all as distances increase. The resulting performance and protocol radius metrics that we establish by simulation indicate how the TCP protocol performs with increasing distance radius between two communicating nodes, and show the boundaries where the protocol undergoes visible performance changes. This allows us to assess the suitability of TCP for long-delay communication, including for deep-space links."
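The core constraint behind the protocol radius can be sketched from first principles: propagation delay grows linearly with distance, so any fixed timer implies a maximum distance beyond which the round trip cannot complete in time. The arithmetic below is an illustration, not the paper's metric; the 120 s timer used in the example is an assumed value, not a figure from the paper.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def rtt_seconds(distance_m):
    """Minimum round-trip time over a path of the given one-way length.
    Propagation delay only; queuing and processing are ignored."""
    return 2.0 * distance_m / C

def max_radius_km(timer_s):
    """Largest one-way distance at which the minimum RTT still fits
    inside a timer of timer_s seconds."""
    return timer_s * C / 2.0 / 1000.0
```

For an Earth-Moon path (~384,400 km one way) the minimum RTT is about 2.56 s, comfortably inside typical TCP timers; an assumed 120 s timer, by contrast, caps the radius at roughly 1.8 × 10⁷ km, far short of even the closest Earth-Mars distance, which is the regime the paper's simulations probe.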

A Reconfigurable Context-Aware Protocol Stack for Interplanetary Communication ... "This paper presents an approach to improve transmission success in delay-tolerant networks. The context-aware broker (CAB) grants networking autonomy when communicating in challenging environments, which suffer from conditions which are variable and exceed the limits for which terrestrial protocols were designed. Such environments currently require human intervention and the manual configuration of each communication - a seemingly simple decision of when to transmit becomes an issue in deep space due to planet movement. However, manual configuration is becoming unrealistic, given the scale on which communications occur. CAB automates the process by making intelligent decisions before transmission begins, and reconfigures as it progresses. It recognises the dynamic environments through which a transmission may pass and matches protocol capabilities with environmental constraints."

GPSDTN: Predictive-Velocity-Enabled Delay-Tolerant Networks for Arctic Research and Sustainability ... "A Delay-Tolerant Network (DTN) is a necessity for communication nodes that may need to wait for long periods to form networks. The IETF Delay Tolerant Network Research Group is developing protocols to enable such networks for a broad variety of Earth and interplanetary applications. The Arctic would benefit from a predictive velocity-enabled version of DTN that would facilitate communications between sparse, ephemeral, often mobile and extremely power-limited nodes. We propose to augment DTN with power-aware, buffer-aware location- and time-based predictive routing for ad-hoc meshes to create networks that are inherently location and time (velocity) aware at the network level to support climate research, emergency services and rural education in the Arctic. On Earth, the primary source of location and universal time information for networks is the Global Positioning System (GPS). We refer to this Arctic velocity-enabled Delay-Tolerant Network protocol as "GPSDTN" accordingly. This paper describes our requirements analysis and general implementation strategy for GPSDTN to support Arctic research and sustainability efforts."

Bringing IPTV to the Market through Differentiated Service Provisioning ... "The world of telecommunications continues to provide radical technologies. Offering the benefits of a superior television experience at reduced long-term costs, IPTV is the newest offering. Deployments, however, are slow to be rolled out; the hardware and software support necessary is not uniformly available. This paper examines the challenges in providing IPTV services and the limitations in developments to overcome these challenges. Subsequently, a proposal is made which attempts to help solve the challenge of fulfilling real-time multimedia transmissions through provisioning for differentiated services. Initial implementations in Opnet are documented, and the paper concludes with an outline of future work."

Improving the Performance of Asynchronous Communication in Long-Distance Delay-Sensitive Networks through the Development of Context-Aware Architectures ... "Context-awareness is inherent in anticipated interplanetary missions. Swarm technologies (D'Arrigo & Santandrea, 2005) use context-awareness in short-haul networks between components, and long-distance networks allow communication with Earth. However, the propagation delays limit real-time communications, deep space being an environment in which the speed of light becomes a restriction. Therefore, the development of a protocol stack which is adaptive to application requirements and external influences will help to maximise communication synchronicity. As part of a first year doctorate research programme, this paper correlates current stack functionalities with interplanetary application requirements. A redesigned stack proposes to resolve this misalignment. Context-awareness is incorporated, enabling intelligent protocol selection using application layer knowledge and environmental information, with particular attention given to transport protocols. The paper concludes by considering transport protocol characteristics when deployed beside a context-aware layer, with the long-term aim being the development of a transport protocol suitable for deployment in the state-of-the-art context-aware stack."

Showcasing my Students' Work:

Immortal Bits: Managing Our Digital Legacies ... "An Ulster University student designed a website to manage and deliver digital assets after death.

There comes a time in our lives when we think about "getting our affairs in order" in anticipation of our inevitable demise. This might entail gathering important documents in a secure place and coordinating access for family or friends. But now that many of our assets are digital - photos, videos, documents, bank accounts - how do we arrange secure access for our heirs? While a master's degree student at Ulster University, Mark Hetherington designed the My Digital Legacy Web service to meet both client and beneficiary needs."

Published Paper Reviews

Discovery in the Internet of Things: the Internet of Things ... "The challenge of making sense of data collected in the Internet of Things (IoT), such that the “needle” can be found in the digital haystack, is the focus of this work. This is a significant area of research in the next phase of IoT development, to allow the IoT potential to be more fully achieved; the envisaged IoT is not currently being exploited due to limited hardware and software developments. This results in challenges related to the collection of data from smart cities and its organization, recognition, and use. Currently, these operations do not take place in a standardized way, resulting in ad hoc device- and application-specific deployments. Furthermore, as the IoT continues to evolve, the achievement of a cohesive, interoperable, and global system becomes increasingly unlikely."

Key Challenges for the Smart City: Turning Ambition into Reality ... "Even if the initiatives are sometimes uncoordinated, they bring the city each time a step closer to becom[ing] a true smart city.” While they do not define the context in which they consider a “true smart city” to exist, the authors capture the current roll-out of experimental technology, and the restricted existence of a “true smart city” where technologies are standardized, interoperable, and able to be easily integrated ..."

Growing Closer on Facebook: Changes in Tie Strength through Social Network Site Use ... "Relationships, measured in the strength of a tie between people, can be characterized according to interactions, and are dependent on what, why, and how we communicate, and the frequency of communication activities. Given the rise in new technologies, and dependencies on these to support day-to-day life, communications are changing in each aspect; we can therefore assume that our relationships are similarly changing ..."

A New Virtual Network Static Embedding Strategy within the Cloud's Private Backbone Network ... "Cloud computing is described, in this work, as overcoming the main issues of the computational world. It may be more accurately considered as compensating for resource availability, provisioning, and allocation decisions, while simultaneously introducing security and management cost impacts ..."

Hierarchical Virtual Machine Consolidation in a Cloud Computing System ... "As a timely contribution to energy efficiency challenges across data centers and clouds, the authors propose a virtual machine (VM) consolidation approach to limit the number of physical machines provisioned and active. By dealing with the problem of optimized component provision, ..."

Characterizing Hypervisor Vulnerabilities in Cloud Computing Servers ... "Security, in every application of the concept, is a constantly moving target: once defenders identify and patch a vulnerability, attackers move on to the next weak spot. Efforts are therefore required to track the path of exposures through ..."

A Study on Virtual Machine Deployment for Application Outsourcing in Mobile Cloud Computing ... "Cloud architectures support data center operations through more optimized responses to application requests. Performance is improved by placing virtual resources closer to application users, with capacity, which fulfils quality of service (QoS) ..."

A Survey of Context Data Distribution for Mobile Ubiquitous Systems ... "Effective context awareness is pertinent across networks today. The fact that a standardized solution has not yet been established is testament to the ongoing evolution and volatility of networks and the technologies involved ..."
page last updated: 2nd February 2023