Model Answers – All Questions

Q1. The Shared Responsibility Model is a foundational concept in cloud security. Explain the model in detail. Differentiate the security responsibilities between the cloud provider and the customer across the three main service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Provide one specific example of a customer responsibility for each model. (6 Marks)

Overview of the Shared Responsibility Model

The Shared Responsibility Model is a cloud-security framework that clearly delineates which security obligations belong to the Cloud Service Provider (CSP) and which belong to the customer. The fundamental premise is straightforward:

  • The CSP is responsible for security OF the cloud. This covers the physical infrastructure, hypervisors, global network fabric, and the managed services themselves.
  • Conversely, the customer is responsible for security IN the cloud. This includes everything the customer deploys, configures, or stores on top of that infrastructure.

This division avoids duplication of effort, establishes accountability, and forms the contractual and operational basis for all cloud security engagements. Misunderstanding this boundary is the root cause of many high-profile cloud data breaches.

Difference Between Infrastructure as a Service (IaaS) and Platform as a Service (PaaS)

Infrastructure as a Service (IaaS)

  • Definition: Provides virtualized computing resources over the internet.
  • Control: Offers maximum control over the infrastructure environment, including server configurations, storage, and networking.
  • Management: Customers are responsible for managing the operating systems, applications, and stored data.
  • Use Case: Suitable for businesses that want to run their own applications and require flexibility in configuring the environment and scalability.
  • Example: Amazon EC2, Microsoft Azure Virtual Machines.

Platform as a Service (PaaS)

  • Definition: Provides a platform allowing customers to develop, run, and manage applications without dealing with the underlying infrastructure.
  • Control: Offers less control compared to IaaS, focusing on the application layer.
  • Management: Customers are responsible for managing applications and data, while the provider manages the underlying infrastructure and platform services.
  • Use Case: Ideal for developers who want to focus on writing code and developing applications without worrying about setting up and managing the underlying infrastructure.
  • Example: Google App Engine, AWS Elastic Beanstalk.

Infrastructure vs Platform

Infrastructure refers to the fundamental computing resources needed to run software. This includes virtual machines, servers, storage, networking, load balancers, and similar low-level resources. In an infrastructure model, the provider gives you the raw building blocks. You are still responsible for managing the operating system, runtime environment, and middleware. You also have to manage applications and much of the security configuration.

Platform sits one level above infrastructure. It provides not only the underlying computing resources, but also the managed environment needed to build, deploy, and run applications. This may include the operating system, runtime, databases, web servers, development frameworks, and deployment tools. In a platform model, the user focuses mainly on the application and data. The provider manages most of the underlying environment.

Summary Table

| Aspect     | IaaS                           | PaaS                                        |
|------------|--------------------------------|---------------------------------------------|
| Control    | High (OS, apps, data)          | Medium (apps and data only)                 |
| Management | Customer manages OS and above  | Provider manages infrastructure and platform |
| Use Case   | Flexible infrastructure needs  | Development-focused                         |
| Examples   | AWS EC2, Azure VMs             | Google App Engine, AWS Elastic Beanstalk    |

1.2  Responsibilities across Service Models

| Layer | CSP Responsibility | Customer Responsibility |
|-------|--------------------|-------------------------|
| IaaS  | Physical data centre, network hardware, hypervisor, storage hardware | OS patching, middleware, runtime, application code, data encryption, firewall rules, IAM |
| PaaS  | Infrastructure + OS + runtime environment + platform services (databases, message queues) | Application code, user data, access control within the platform, secure coding practices |
| SaaS  | Entire stack: infrastructure, platform, application logic, availability, patching | Data classification, user account management, endpoint device security, correct usage of features |

1.3  Specific Customer Responsibility Examples

  • IaaS (e.g., AWS EC2): The customer must patch and harden the guest OS, while AWS manages the Nitro hypervisor and physical hosts. A vulnerability in an unpatched Linux kernel inside the VM is entirely the customer's problem to resolve, and failure to apply security patches is a leading cause of IaaS compromises.
  • PaaS on AWS (for example, AWS Elastic Beanstalk or AWS App Runner): The customer must ensure the application code is secure and free of OWASP (Open Worldwide Application Security Project) Top 10 vulnerabilities such as SQL injection. AWS manages the underlying infrastructure, including operating system patching, platform maintenance, and scaling, but insecure application logic, such as code that accepts unvalidated user input, remains the developer's responsibility.
  • SaaS on AWS (for example, Amazon QuickSight or Amazon Connect): The customer must manage user access carefully and enforce strong authentication such as multi-factor authentication. AWS secures the underlying SaaS infrastructure and service platform, but a user account granted excessive permissions without MFA is a customer-side risk: a phishing attack that steals those credentials could exploit it directly.

1.4  Why the Model Matters

Without a clear shared responsibility model, organizations tend to assume the CSP handles everything, which leads to misconfigured S3 buckets, open security groups, and plain-text credential storage, all of which are customer-side failures. The model drives accountability, informs SLA (Service Level Agreement) negotiation, and is the basis for cloud security audit frameworks such as CSA CCM and ISO 27017.

Q2. Effective Identity and Access Management (IAM) is critical for cloud security. Differentiate between Authentication and Authorization. Explain the role of SAML (Security Assertion Markup Language) in enabling federated identity and Single Sign-On (SSO). Describe a scenario where implementing Multi-Factor Authentication (MFA) is essential. (6 Marks)

Federated Identity and Single Sign-On (SSO)

Federated Identity is an authentication process that allows users to access multiple services across different security domains with a single set of credentials. This concept eliminates the need for multiple usernames and passwords, streamlining user experience and enhancing security. Federated identity works by creating a trust relationship between the Identity Provider (IdP) and multiple Service Providers (SPs). The IdP authenticates the user and provides them with security assertions that can be accepted by one or more SPs.

Single Sign-On (SSO) is a user authentication process that enables a user to log in once and gain access to multiple applications without needing to log in again for each application. SSO is often implemented in conjunction with federated identity, allowing users to authenticate with an IdP and access various SPs seamlessly.

Key Benefits of Federated Identity and SSO

  • Reduced Credential Fatigue: Users remember fewer passwords, decreasing the likelihood of weak password selections.
  • Enhanced Security: Centralized authentication can enforce stronger password policies and multi-factor authentication (MFA).
  • Improved User Experience: Users can quickly switch between applications without repeated logins, increasing productivity.
  • Simplified User Management: Administrators can manage user access policies from a single point, making onboarding and offboarding more efficient.

Common protocols supporting federated identity and SSO include SAML (Security Assertion Markup Language), OAuth, and OpenID Connect. These protocols facilitate secure communication between the IdP and SPs, ensuring that user identities are properly verified and authorized.

2a)  Authentication vs. Authorization

These two concepts are fundamentally distinct yet deeply interdependent. Together they form the backbone of access control.

| Aspect | Authentication (AuthN) | Authorization (AuthZ) |
|--------|------------------------|------------------------|
| Question asked | Who are you? | What are you allowed to do? |
| Purpose | Verify identity | Enforce access policy |
| When it occurs | Before any resource access | After successful authentication |
| Based on | Password, certificate, biometric, OTP | Roles, permissions, ACLs (Access Control Lists), policies |
| Example | Verifying username + password at login | Checking if a user has the admin role to delete a record |
| Standard / Protocol | OAuth 2.0, SAML, OpenID Connect | RBAC, ABAC, XACML, IAM policies |
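The table above can be made concrete with a minimal sketch that keeps the two checks separate: a credential check (AuthN) that runs first, then a role-based permission check (AuthZ). All of the names here (`USERS`, `ROLES`, `login`, `can`) and the sample data are hypothetical illustrations, not any particular IAM product's API:

```python
# Minimal sketch of AuthN vs AuthZ as two distinct checks.
import hashlib

USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}   # username -> password hash
ROLES = {"alice": {"admin"}}                                # username -> roles
PERMS = {"admin": {"record:delete", "record:read"},         # role -> permissions
         "viewer": {"record:read"}}

def login(user: str, password: str) -> bool:
    """Authentication: WHO are you? Verify the presented credential."""
    stored = USERS.get(user)
    return stored is not None and stored == hashlib.sha256(password.encode()).hexdigest()

def can(user: str, action: str) -> bool:
    """Authorization: WHAT may you do? Runs only after AuthN succeeds."""
    return any(action in PERMS.get(role, set()) for role in ROLES.get(user, set()))

assert login("alice", "s3cret")          # AuthN passes
assert not login("alice", "wrong")       # bad credential rejected
assert can("alice", "record:delete")     # AuthZ: admin role allows delete
assert not can("bob", "record:read")     # unknown user has no permissions
```

Keeping the two functions separate mirrors the table: a successful `login` says nothing about what the user may do; that decision belongs entirely to `can`.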

2b)  Explain the role of SAML (Security Assertion Markup Language) in enabling federated identity and Single Sign-On (SSO).

SAML and Federated Identity / SSO

SAML (Security Assertion Markup Language) is an XML-based open standard for exchanging authentication and authorization data between an Identity Provider (IdP) and a Service Provider (SP). It underpins federated identity: one set of credentials grants access to resources across multiple, separately administered security domains.

Core participants in a SAML flow:

  • Identity Provider (IdP): Authenticates the user and issues digitally signed SAML Assertions (e.g., Okta, Azure Active Directory, ADFS).
  • Service Provider (SP): The cloud application that trusts the IdP and grants access based on the assertion (e.g., Salesforce, AWS Console).
  • Principal: The end user whose identity is being asserted.

SP-Initiated SSO flow (most common in cloud environments):

  • User attempts to access the SP (e.g., AWS Management Console).
  • SP detects no active session and redirects the user to the IdP with a SAML AuthnRequest.
  • IdP challenges the user to authenticate (password + MFA).
  • On success, IdP generates a digitally signed SAML Response containing Assertions (identity claims, attributes, session validity period).
  • Browser posts the SAML Response to the SP’s Assertion Consumer Service (ACS) URL.
  • SP validates the digital signature using the IdP’s pre-shared certificate. It then parses the Assertion and creates a local session. This grants access without the user ever touching a local password.
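The trust check the SP performs in the final step can be sketched conceptually. Real SAML uses XML Signature verified against the IdP's X.509 certificate; in this simplified stand-in, an HMAC key shared between IdP and SP plays the role of that signature, and the issue/validate function names are invented for illustration:

```python
# Conceptual sketch of assertion signing (IdP side) and validation (SP side).
import hashlib
import hmac
import json
import time

IDP_SP_SHARED_KEY = b"pre-shared-idp-key"   # hypothetical stand-in for the IdP certificate

def idp_issue_assertion(subject: str, ttl: int = 300):
    """IdP: produce a signed, time-limited assertion about the principal."""
    body = json.dumps({"sub": subject, "exp": time.time() + ttl}).encode()
    sig = hmac.new(IDP_SP_SHARED_KEY, body, hashlib.sha256).hexdigest()
    return body, sig

def sp_validate(body: bytes, sig: str):
    """SP: verify signature and expiry before creating a local session."""
    expected = hmac.new(IDP_SP_SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None                          # forged or tampered assertion
    claims = json.loads(body)
    if claims["exp"] < time.time():
        return None                          # replay of an expired assertion
    return claims["sub"]                     # identity the SP can now trust

body, sig = idp_issue_assertion("alice@example.com")
assert sp_validate(body, sig) == "alice@example.com"
assert sp_validate(body.replace(b"alice", b"mallory"), sig) is None  # tamper detected
```

The key property this illustrates is that the SP never sees a password: it trusts a signed, expiring statement from the IdP, which is exactly what makes centralized MFA and rapid de-provisioning possible.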

Security benefits of SAML/SSO:

  • Eliminates password proliferation across services, reducing credential-theft surface.
  • Centralizes authentication enforcement – MFA can be mandated at the IdP for all services at once.
  • Enables rapid de-provisioning: disabling an account at the IdP immediately blocks access to all federated SPs.
  • Assertions are time-limited and signed, preventing replay and forgery attacks.

2c)  Scenario Where MFA is Essential

Scenario: A financial services firm provides remote access to its cloud-hosted Treasury Management System (TMS). This access is given to 50 employees who work from home. The system contains sensitive transaction data, wire-transfer capabilities, and customer PII.

Why MFA is essential here:
  • Password-only authentication is insufficient in this context. Phishing campaigns routinely steal credentials; a single compromised employee password would give an attacker direct access to wire-transfer functionality.
  • The workforce is geographically distributed and authenticates over the public internet; there is no network perimeter to absorb the risk.
  • Regulatory frameworks applicable to financial services, such as PCI-DSS, RBI guidelines, and the SWIFT Customer Security Programme, explicitly require MFA for access to critical payment systems.
  • The blast radius of a single account compromise is enormous. An attacker can initiate fraudulent wire transfers worth millions in minutes.

MFA requires multiple independent proofs of identity at login:
  • Something you know: a password.
  • Something you have: a TOTP app (such as Google Authenticator) or a FIDO2 hardware key.
  • Something you are (optional): biometrics, such as a fingerprint on a registered device.

This defence in depth stops the attack: phishing might steal the password, but the attacker cannot proceed without the second factor.

Q3. Cloud infrastructure security involves protecting the core components of your deployment. Discuss the importance of securing compute resources. Outline three distinct best practices for hardening a virtual machine (VM) image to minimize its attack surface before deployment in a cloud environment. (6 Marks)

3.1  Importance of Securing Compute Resources

Compute resources (VMs, containers, bare-metal instances) are the execution environment for every application workload. A compromised VM can become a pivot point for lateral movement across a cloud VPC, an exfiltration node for data theft, a cryptomining host, or a botnet participant. Cloud VMs are often directly internet-routable and are provisioned at scale, sometimes as thousands of instances launched from identical images. A vulnerability in a base image is therefore amplified across every instance launched from it, which makes image hardening a force-multiplier security control.

NIST SP 800-190 and the CIS Benchmarks both cite pre-deployment image hardening as a critical preventive control: it is far more cost-effective to eliminate attack surface before deployment than to detect and respond to exploitation after the fact.

3.2  Three Best Practices for VM Image Hardening

Best Practice 1: Minimal OS Installation and Attack Surface Reduction

Start from a minimal base image (e.g., AWS-provided Amazon Linux 2023 minimal, Ubuntu Server minimal, or CIS-hardened golden AMI). Remove every package, service, and daemon not required for the workload’s specific function. Apply the principle: if it is not needed, it is not installed.

  • Disable and remove network-accessible services not required (Telnet, FTP, rsh, rlogin, NFS if unused, etc.) to eliminate entry points.
  • Remove compilers (gcc, make) and unneeded scripting interpreters from production images; an attacker who achieves initial access then cannot easily compile or run exploit payloads.
  • Disable unnecessary kernel modules. Use tools like Lynis, OpenSCAP, or CIS-CAT to benchmark the stripped-down image against a known-good security baseline before snapshotting.
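The "if it is not needed, it is not installed" rule can be enforced as a pipeline gate. The sketch below audits an image's installed-package list against a blocklist; the package names and blocklist are illustrative, and a real pipeline would feed in `rpm -qa` or `dpkg -l` output from the build host:

```python
# Hedged sketch: fail the image build if blocklisted packages remain installed.
BLOCKLIST = {"telnet", "vsftpd", "rsh-client", "gcc", "make"}  # illustrative list

def audit_packages(installed: set[str]) -> set[str]:
    """Return packages that must be removed before the image is frozen."""
    return installed & BLOCKLIST

image_packages = {"openssh-server", "telnet", "gcc", "curl"}
violations = audit_packages(image_packages)
assert violations == {"telnet", "gcc"}   # build blocked until these are purged
assert audit_packages({"openssh-server", "curl"}) == set()  # clean image passes
```

Running this as a mandatory build step, alongside benchmark tools like Lynis or OpenSCAP, turns the minimal-installation principle into an automatically enforced policy rather than a manual checklist.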

Best Practice 2: Apply All Security Patches and Establish a Patch Pipeline

A VM image should be fully patched against known CVEs at the moment it is built. An organization should implement an automated image-build pipeline (e.g., HashiCorp Packer + AWS Systems Manager Patch Manager or Azure Image Builder) that:

  • Starts from the latest upstream base image.
  • Applies all OS security patches (e.g., yum update --security; on Debian/Ubuntu, unattended-upgrade or the distribution's equivalent security-update step).
  • Runs automated vulnerability scanning (Amazon Inspector, Trivy, or Qualys) against the freshly patched image.
  • Tags the image with build timestamp and vulnerability-scan pass/fail status.
  • Automatically deprecates images older than a defined threshold (e.g., 90 days) to force relaunching with newer images.

The 2017 Equifax breach illustrates the stakes: attackers exploited an unpatched Apache Struts flaw that had sat open for months. An automated patch pipeline closes such windows quickly and logs every build for audit.
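The age-based deprecation step can be sketched as a small policy function, assuming images are tagged with a build timestamp as described above. The AMI IDs and tag layout here are invented for illustration:

```python
# Sketch: flag images older than the allowed threshold for deprecation.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)   # example threshold from the pipeline policy

def images_to_deprecate(images: list[dict], now: datetime) -> list[str]:
    """Return IDs of images whose build timestamp exceeds MAX_AGE."""
    return [img["id"] for img in images if now - img["built_at"] > MAX_AGE]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
fleet = [
    {"id": "ami-fresh", "built_at": now - timedelta(days=10)},
    {"id": "ami-stale", "built_at": now - timedelta(days=120)},
]
assert images_to_deprecate(fleet, now) == ["ami-stale"]
```

In practice this check would run on a schedule against the image registry, forcing teams to relaunch from freshly patched images instead of accumulating stale ones.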

Best Practice 3: Enforce Least Privilege, Remove Default Credentials, and Lock Down Access

Default configurations are the adversary’s greatest ally. Before freezing a VM image:

  • Change or remove all default passwords. For Linux, disable password-based SSH login entirely; use only SSH key-pair authentication. Remove default cloud-provider accounts (ec2-user default shell history should be cleared; root login should be disabled).
  • Remove or disable all default or test user accounts. Audit /etc/passwd for stale or unnecessary accounts.
  • Apply OS-level access controls. Enable SELinux (enforcing mode) or AppArmor, which apply mandatory access control policies that confine processes even if they are exploited.
  • Set appropriate file-system permissions. Critical files (/etc/passwd, /etc/shadow, SSH host keys) should have restrictive permissions. Use umask 027 as the default.
  • Disable root SSH login by setting PermitRootLogin no in sshd_config. Restrict SSH to specific management CIDRs. Use host-based firewall rules like iptables, nftables, or firewalld.
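A final hardening gate can verify the frozen image's sshd_config actually enforces these controls. The parser below is deliberately simplified (real sshd_config supports Match blocks, includes, and tab separators), so treat it as a sketch of the audit idea:

```python
# Hedged sketch: check sshd_config for the required hardening directives.
REQUIRED = {"permitrootlogin": "no", "passwordauthentication": "no"}

def audit_sshd(config_text: str) -> list[str]:
    """Return the required directives that are missing or set incorrectly."""
    settings = {}
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if line:
            key, _, value = line.partition(" ")
            settings[key.lower()] = value.strip().lower()
    return [k for k, v in REQUIRED.items() if settings.get(k) != v]

hardened = "PermitRootLogin no\nPasswordAuthentication no\nPort 22\n"
weak = "PermitRootLogin yes\n# PasswordAuthentication no\n"
assert audit_sshd(hardened) == []
assert set(audit_sshd(weak)) == {"permitrootlogin", "passwordauthentication"}
```

Note that a commented-out directive counts as a failure: sshd falls back to its default, which for password authentication is permissive.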

Q4. Securing microservices presents unique challenges compared to monolithic applications. Describe two security risks that are specific to a microservices architecture. Explain how an API Gateway can be used as a critical security control point for securing microservices-based applications. (6 Marks)

4a)  Two Security Risks Specific to Microservices

Risk 1: Expanded East-West Attack Surface (Lateral Movement via Service-to-Service Communication)

In a monolithic application, components communicate via in-process function calls that never cross a network boundary. In a microservices architecture, hundreds of services communicate over the network using REST, gRPC, or message queues. This internal (east-west) traffic is frequently un-encrypted and unauthenticated in poorly-designed deployments.

If an attacker compromises one small internal service, they can use it as a stepping stone to reach more critical services such as payments or databases. Because internal traffic is usually trusted by default, the attacker can move laterally inside the system without ever touching the external firewall.

Microservices thus expand the attack surface dramatically: an attacker who compromises one network-reachable service can pivot widely, whereas in a monolith the same movement would require deep access to the codebase. This risk maps to "Broken Object Level Authorization" (BOLA) in the OWASP (Open Worldwide Application Security Project) API Security Top 10, where an attacker tricks one service into accessing data or objects in another that it should not reach.

Risk 2: Insecure and Unmanaged Secrets Distribution (Credential Sprawl)

Microservices require a large number of secrets to function. These include database connection strings, API keys for third-party services, TLS certificates, JWT signing keys, and inter-service authentication tokens. In a monolith, these might be stored in a single configuration file. In a microservices architecture with 50+ services, each potentially independently deployable by a different team, secrets management becomes exponentially harder. Common failures include:

  • Secrets hard-coded in Docker images or source code, violating the 12-Factor App principle.
  • Secrets stored in environment variables that get logged by orchestration platforms.
  • Developers spinning up personal test deployments with production credentials.
  • Secrets never rotated, because rotation requires coordinated redeployment of many services simultaneously.

A 2023 GitGuardian report found secrets exposed in millions of public repositories, many originating from microservices development workflows.
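The kind of scan that tools like GitGuardian automate can be sketched as a regex pass over source text for credential-shaped strings. The two patterns below are examples only, not a complete ruleset:

```python
# Illustrative secret-scanning sketch: flag credential-shaped strings in source.
import re

PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS key ID shape
    "generic_secret": re.compile(r"(?i)(password|secret)\s*=\s*['\"][^'\"]+['\"]"),
}

def scan(text: str) -> list[str]:
    """Return the names of patterns that match anywhere in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

leaky = 'db_password = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"'
assert set(scan(leaky)) == {"aws_access_key", "generic_secret"}
assert scan("url = os.environ['DB_URL']") == []   # env lookup, no literal secret
```

Running such a scan in CI on every commit catches hard-coded secrets before they reach an image registry or a public repository, which is where the sprawl described above becomes unrecoverable.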

4b)  API Gateway as a Critical Security Control Point

  • Single entry point and centralized security
    The API Gateway sits in front of all microservices and forces every client request to pass through it, reducing the number of directly exposed endpoints and providing a single place to enforce security policies consistently.
  • Centralized authentication and authorization
    The gateway can validate JWTs, OAuth tokens, or API keys for every request. It enforces role‑based or attribute‑based access to specific routes. This removes the need for each microservice to re-implement the same logic. It also reduces the chance of missing or inconsistent checks.
  • Traffic protection and abuse prevention
    The gateway can apply rate limiting, throttling, and IP‑based rules. These measures protect backend services from brute‑force attacks. They also guard against denial‑of‑service‑style flooding and API abuse. This shielding improves resilience of the whole system.
  • Request filtering and attack mitigation
    It can inspect incoming requests. It can reject malformed payloads. It integrates with a Web Application Firewall (WAF) to block common attacks such as injection attempts.
  • Visibility, logging, and auditing
    All traffic passes through the gateway. This makes it easier to log who accessed which API. It also logs when and what happened. This supports monitoring, incident investigation, and compliance auditing for the entire microservices estate.
  • Hiding internal topology
    Clients only see the gateway’s public endpoint, not the internal host names or ports of individual microservices. This reduces unnecessary exposure and makes it harder for attackers to map and directly target internal components.

Q5. Serverless architectures are becoming increasingly popular but introduce new security considerations. Explain the security risk known as “Event Injection,” where an attacker manipulates event data to exploit a serverless function. Outline two best practices for implementing secure permissions for serverless functions, aligning with the Principle of Least Privilege. (6 Marks)

5a)  Event Injection in Serverless Architectures

Event Injection is a security risk in serverless architecture. It occurs when an attacker manipulates the event data sent to a serverless function. This manipulation causes the function to perform unintended or harmful actions. Serverless functions are commonly triggered by events from sources such as APIs, queues, storage uploads, or database changes. A maliciously crafted event can exploit weak input validation. It can also exploit flawed logic inside the function. As a result, the function may process false instructions, access unauthorized resources, alter data improperly, or trigger further malicious activity. Thus, the attacker abuses the trust placed in event input rather than attacking the infrastructure directly.
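The standard defence is strict input validation of the event before the function acts on it. The sketch below checks a hypothetical order event against an allow-list schema; the field names and limits are invented for illustration:

```python
# Sketch of event-injection defence: validate event shape, types, and values
# against an allow-list before any business logic runs.
def validate_order_event(event: dict) -> dict:
    """Return a sanitized copy of the event, or raise ValueError if malicious."""
    allowed_fields = {"order_id", "quantity"}
    if set(event) - allowed_fields:
        raise ValueError("unexpected fields in event")        # no smuggled keys
    if not (isinstance(event.get("order_id"), str) and event["order_id"].isalnum()):
        raise ValueError("order_id must be alphanumeric")     # blocks injection strings
    if not (isinstance(event.get("quantity"), int) and 0 < event["quantity"] <= 100):
        raise ValueError("quantity out of range")
    return dict(event)

assert validate_order_event({"order_id": "A123", "quantity": 2})["quantity"] == 2
try:
    validate_order_event({"order_id": "1; DROP TABLE orders", "quantity": 2})
    raise AssertionError("injection string was accepted")
except ValueError:
    pass
```

The key idea is that the function treats every event source, even internal queues and storage triggers, as untrusted input, rather than assuming upstream services have already sanitized it.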

5b)  Two Best Practices for Least Privilege in Serverless Functions

Best Practice 1: One IAM Role per Function (Function-Level Permission Isolation)

The most common serverless security anti-pattern is assigning a single, broadly permissioned IAM role to every Lambda function in an application, simply because it is operationally convenient. This violates the Principle of Least Privilege (PoLP): a vulnerability in any one function inherits the combined permissions of all the others.

The correct approach is to create a dedicated, minimal IAM execution role for each Lambda function, granting only the exact AWS actions on the exact resources required for that function’s specific task:

  • A function that reads from a specific S3 bucket should have s3:GetObject on arn:aws:s3:::bucket-name/* and nothing else: no s3:PutObject, no s3:DeleteObject, and no access to any other bucket.
  • Functions must never be granted administrator-level permissions or the ability to modify their own IAM roles.

Blast-radius containment: even if an event injection attack achieves code execution inside a function, the attacker's AWS API capability is bounded by that function's role. With function-level isolation, they can reach only the specific S3 bucket or DynamoDB table the function legitimately needs, not the entire AWS account.
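The per-function policy described above can be sketched as plain IAM policy JSON. The bucket name is hypothetical; the document structure follows the standard IAM JSON policy grammar:

```python
# Sketch of a minimal, single-purpose Lambda execution-role policy.
import json

def read_only_s3_policy(bucket: str) -> str:
    """Least-privilege policy: s3:GetObject on one bucket's objects, nothing else."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],                   # no Put, no Delete, no List
            "Resource": [f"arn:aws:s3:::{bucket}/*"],     # one bucket only
        }],
    })

policy = json.loads(read_only_s3_policy("invoice-archive"))
stmt = policy["Statement"][0]
assert stmt["Action"] == ["s3:GetObject"]
assert stmt["Resource"] == ["arn:aws:s3:::invoice-archive/*"]
```

Generating policies from a function like this, rather than copy-pasting broad templates, makes the least-privilege boundary reviewable in code and keeps each function's role from drifting wider over time.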

Best Practice 2: Enforce Resource-Based Policies and VPC Isolation for Sensitive Functions

Least privilege for serverless is not only about what the function can do (IAM execution role) but also about who can invoke the function and where the function can communicate.

  • Resource-based policies (Lambda function policies): Use aws lambda add-permission to define exactly which AWS accounts, IAM principals, and services can invoke the function. A payment-processing Lambda should be invocable only by the specific API Gateway resource or SNS topic in the same account, never publicly or by any authenticated AWS caller. This prevents confused-deputy attacks and unauthorised invocation from compromised sibling accounts.
  • VPC placement and Security Groups: Deploy functions that access private databases or internal services inside a VPC, with Security Group rules that restrict egress to only the required ports and destination CIDR (Classless Inter-Domain Routing) ranges. Even if the function is compromised, it then cannot make arbitrary outbound connections to attacker-controlled infrastructure.
  • Environment variable encryption: Store all secrets in AWS Secrets Manager, encrypted with a customer-managed KMS key, and retrieve them at runtime via API calls. Never store them in plain environment variables, as these are visible in the Lambda console and CloudFormation templates.
  • Timeouts and concurrency limits: Set aggressive function timeouts (e.g., 30 seconds maximum rather than 15 minutes) and reserved concurrency limits to bound the blast radius of a DoS attack or an infinite-loop injection payload.
