Cloud Native and IT Security News and Trends | The New Stack
https://thenewstack.io/security/

LLMs and Data Privacy: Navigating the New Frontiers of AI
https://thenewstack.io/llms-and-data-privacy-navigating-the-new-frontiers-of-ai/ (Wed, 27 Sep 2023)


Large Language Models (LLMs) like ChatGPT are revolutionizing how we interact online, offering unmatched efficiency and personalization. But as these AI-driven tools become more prevalent, they bring significant concerns about data privacy to the forefront. With models like OpenAI’s ChatGPT becoming staples in our digital interactions, the need for robust confidentiality measures is more pressing than ever.

I have been thinking about security for generative AI lately. Not because I have tons of private data but because my clients do. I also need to be mindful of taking their data and manipulating it or analyzing it in SaaS-based LLMs, as doing so could breach privacy. Numerous cautionary tales exist already of professionals doing this either knowingly or unknowingly. Among my many goals in life, being a cautionary tale isn’t one of them.

Current AI Data Privacy Landscape

Despite the potential of LLMs, there’s growing apprehension about their approach to data privacy. For instance, OpenAI’s ChatGPT, while powerful, refines its capabilities using user data and sometimes shares this with third parties. Platforms like Anthropic’s Claude and Google’s Bard have retention policies that might not align with users’ data privacy expectations. These practices highlight an industry-wide need for a more user-centric approach to data handling.

The digital transformation wave has seen generative AI tools emerge as game-changers. Some industry pundits even compare their transformative impact to landmark innovations like the internet, and their impact may prove just as great, if not greater. Yet as the adoption of LLM applications and tools skyrockets, a glaring gap remains: preserving the privacy of the data these models process, which means securing both the training data fed into them and any data the models output. This presents a unique challenge. While LLMs require vast data to function optimally, they must also navigate a complex web of data privacy regulations.

Legal Implications and LLMs

The proliferation of LLMs hasn’t escaped the eyes of regulatory bodies. Frameworks like the EU AI Act, General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) have set stringent data sharing and retention standards. These regulations aim to protect user data, but they also pose challenges for LLM developers and providers, emphasizing the need for innovative solutions that prioritize user privacy.

Top LLM Data Privacy Threats

In August, the Open Web Application Security Project (OWASP) released the Top 10 for LLM Applications 2023, a comprehensive guide to the most critical security risks to LLM applications. One such concern is training data poisoning. This happens when changes to data or process adjustments introduce vulnerabilities, biases, or even backdoors. These modifications can endanger the security and ethical standards of the model. To tackle this, confirming the genuineness of the training data’s supply chain is vital.

Using sandboxing can help prevent unintended data access, and it’s crucial to vet specific training datasets rigorously. Another challenge is supply chain vulnerabilities. The core foundation of LLMs, encompassing training data, ML models and deployment platforms, can be at risk due to weaknesses in the supply chain. Addressing this requires a comprehensive evaluation of data sources and suppliers. Relying on trusted plugins and regularly engaging in adversarial testing ensures the system remains updated with the latest security measures.

Sensitive information disclosure is another challenge. LLMs might unintentionally disclose confidential data, leading to privacy concerns. To mitigate this risk, it’s essential to use data sanitization techniques. Implementing strict input validation processes and hacker-driven adversarial testing can help identify potential vulnerabilities.

Enhancing LLMs with plugins can be beneficial, but it can also introduce security concerns due to insecure plugin design. These plugins can become potential gateways for security threats. To ensure these plugins remain secure, it’s essential to have strict input guidelines and robust authentication methods. Continuously testing these plugins for security vulnerabilities is also crucial.

Lastly, excessive agency in LLMs can be problematic. Giving too much autonomy to these models can lead to unpredictable and potentially harmful outputs. It’s essential to set clear boundaries on the tools and permissions granted to these models to prevent such outcomes. Functions and plugins should be clearly defined, and human oversight should always be in place, especially for significant actions.

Three Approaches to LLM Security

There isn’t a one-size-fits-all approach to LLM security. It’s a balancing act between how you want the model to interact with internal and external sources of information and with the users of those models. For example, you may want both customer-facing and internal chatbots to draw on private institutional knowledge.

Data Contagion Within Large Language Models (LLMs)

Data contagion in Large Language Models (LLMs) is the accidental dissemination of confidential information via a model’s inputs. Given the intricate nature of LLMs and their expansive training datasets, ensuring that these computational models do not inadvertently disclose proprietary or sensitive data is imperative.

In the contemporary digital landscape, characterized by frequent data breaches and heightened privacy concerns, mitigating data contagion is essential. An LLM that inadvertently discloses sensitive data poses substantial risks, both in terms of reputational implications for entities and potential legal ramifications.

One approach to address this challenge encompasses refining the training datasets to exclude sensitive information, ensuring periodic model updates to rectify potential vulnerabilities and adopting advanced methodologies capable of detecting and mitigating risks associated with data leakage.

Sandboxing Techniques for LLMs

Sandboxing is another strategy to keep data safe when working with AI models. Sandboxing entails the creation of a controlled computational environment wherein a system or application operates, ensuring that its actions and outputs remain isolated and don’t make their way outside of that system.

For LLMs, the application of sandboxing is particularly salient. By establishing a sandboxed environment, entities can regulate access to the model’s outputs, ensuring interactions are limited to authorized users or systems. This strategy enhances security by preventing unauthorized access and potential model misuse.

With more than 300,000 models available on Hugging Face, including exceptionally powerful large language models, it is within reason for enterprises that have the means to deploy their own EnterpriseGPT that remains private.

Effective sandboxing necessitates the implementation of stringent access controls, continuous monitoring of interactions with the LLM and establishing defined operational parameters to ensure the model’s actions remain within prescribed limits.
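
As a minimal sketch of that kind of access control in front of a privately hosted model, the Go middleware below only forwards requests from an allowlist of approved identities to an internal LLM endpoint. The header name, user list and proxy handler are assumptions for illustration, not a complete sandboxing implementation.

package main

import (
    "net/http"
)

// allowedUsers stands in for whatever identity store the organization uses.
var allowedUsers = map[string]bool{
    "analyst@example.com": true,
    "svc-reporting":       true,
}

// requireAuthorizedUser rejects callers that are not on the allowlist,
// keeping interactions with the model limited to approved identities.
func requireAuthorizedUser(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        user := r.Header.Get("X-Authenticated-User") // set by an upstream auth proxy
        if !allowedUsers[user] {
            http.Error(w, "forbidden", http.StatusForbidden)
            return
        }
        next.ServeHTTP(w, r)
    })
}

func main() {
    llmProxy := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Forward the request to the private model endpoint here.
        w.Write([]byte("sandboxed LLM response"))
    })
    http.Handle("/v1/chat", requireAuthorizedUser(llmProxy))
    http.ListenAndServe(":8080", nil)
}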

Data Obfuscation Before LLM Input

The technique of “obfuscation” has emerged as a prominent strategy in data security. Obfuscation pertains to modifying original data to render it unintelligible to unauthorized users while retaining its utility for computational processes. In the context of LLMs, this implies altering data to remain functional for the model but become inscrutable for potential malicious entities. Given the omnipresent nature of digital threats, obfuscating data before inputting it into an LLM is a protective measure. In the event of unauthorized access, the obfuscated data, devoid of its original context, offers minimal value to potential intruders.

Several methodologies are available for obfuscation, such as data masking, tokenization and encryption. It is vital to choose a technique that aligns with the operational requirements of the LLM and the inherent nature of the data being processed. Selecting the right approach ensures optimal protection while preserving the integrity of the information.
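
To make the masking idea concrete, here is a rough Go sketch (not a production-grade anonymizer) that replaces email addresses in a prompt with placeholder tokens before the prompt leaves your system, and keeps a lookup table so the original values can be restored in the model's response. The regular expression and placeholder scheme are assumptions chosen for the example.

package main

import (
    "fmt"
    "regexp"
    "strings"
)

var emailRe = regexp.MustCompile(`[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}`)

// maskEmails replaces each email address with a numbered placeholder and
// returns the masked text plus the map needed to restore the originals.
func maskEmails(text string) (string, map[string]string) {
    restore := make(map[string]string)
    count := 0
    masked := emailRe.ReplaceAllStringFunc(text, func(match string) string {
        count++
        placeholder := fmt.Sprintf("<EMAIL_%d>", count)
        restore[placeholder] = match
        return placeholder
    })
    return masked, restore
}

// unmask puts the original values back into the model's response.
func unmask(text string, restore map[string]string) string {
    for placeholder, original := range restore {
        text = strings.ReplaceAll(text, placeholder, original)
    }
    return text
}

func main() {
    prompt := "Summarize the complaint from jane.doe@example.com about billing."
    masked, restore := maskEmails(prompt)
    fmt.Println(masked) // safe to send to the LLM
    // ... call the LLM with the masked prompt and receive a response ...
    response := "I drafted a reply to <EMAIL_1> regarding the billing issue."
    fmt.Println(unmask(response, restore))
}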

In conclusion, as LLMs continue to evolve and find applications across diverse sectors, ensuring their security and the integrity of the data they process remains paramount. Proactive measures, grounded in rigorous academic and technical research, are essential to navigate the challenges posed by this dynamic domain.

OpaquePrompts: Open Source Obfuscation for LLMs

In response to these challenges, OpaquePrompts was recently released on GitHub by Opaque Systems. It preserves the privacy of user data by sanitizing it, ensuring that personal or sensitive details are removed before interfacing with the LLM. By harnessing advanced technologies such as confidential computing and trusted execution environments (TEEs), OpaquePrompts guarantees that only the application developer can access the full scope of the prompt’s data. The OpaquePrompts suite of tools is available on GitHub for those interested in delving deeper.

OpaquePrompts is engineered for scenarios demanding insights from user-provided contexts. Its workflow is comprehensive:

  • User Input Processing: LLM applications create a prompt, amalgamating retrieved-context, memory and user queries, which is then relayed to OpaquePrompts.
  • Identification of Sensitive Data: Within a secure TEE, OpaquePrompts utilizes advanced NLP techniques to detect and flag sensitive tokens in a prompt.
  • Prompt Sanitization: All identified sensitive tokens are encrypted, ensuring the sanitized prompt can be safely relayed to the LLM.
  • Interaction with LLM: The sanitized prompt is processed by the LLM, which then returns a similarly sanitized response.
  • Restoring Original Data: OpaquePrompts restores the original data in the response, ensuring users receive accurate and relevant information.

The Future: Merging Confidentiality with LLMs

In the rapidly evolving landscape of Large Language Models (LLMs), the intersection of technological prowess and data privacy has emerged as a focal point of discussion. As LLMs, such as ChatGPT, become integral to our digital interactions, the imperative to safeguard user data has never been more pronounced. While these models offer unparalleled efficiency and personalization, they also present challenges in terms of data security and regulatory compliance.

Solutions like OpaquePrompts are among the first of many that will exemplify how data privacy at the prompt layer can be a game-changer. Instead of venturing into the daunting task of self-hosting a foundational model, focusing on prompt-layer privacy provides data confidentiality from the get-go, without requiring the expertise and costs associated with in-house model serving. This simplifies LLM integration and reinforces user trust, underscoring the commitment to data protection.

It is evident that as we embrace the boundless potential of LLMs, a concerted effort is required to ensure that data privacy is not compromised. The future of LLMs hinges on this delicate balance, where technological advancement and data protection coalesce to foster trust, transparency and transformative experiences for all users.

Secure Go APIs with Decentralized Identity Tokens, Part 1
https://thenewstack.io/secure-go-apis-with-decentralized-identity-tokens-part-1/ (Wed, 27 Sep 2023)


APIs enable the exchange of data and functionality between different software applications, making them a crucial component of modern software systems. However, as we rely on APIs more and more, ensuring their security becomes essential to protect sensitive data, maintain user privacy, and prevent unauthorized access or misuse of resources.

The rise of decentralized identity tokens adds a new dimension to API security. Traditionally, API authentication and authorization relied heavily on centralized identity providers, such as username/password combinations or access tokens issued by third-party services like OAuth. While these approaches have been widely used and effective, they introduce a level of dependency on centralized authorities and increase the risk of data breaches and single points of failure.

Decentralized identity tokens, on the other hand, leverage decentralized identity frameworks and technologies like blockchain or distributed ledgers to provide a more secure and privacy-enhancing alternative. They enable individuals to have greater control over their identities and authenticate themselves without relying on a central authority.

By using decentralized identity tokens, APIs get increased protection against identity theft and impersonation attacks.

Securing your Go APIs with decentralized identity tokens is a good practice to enhance the security and trustworthiness of your application. Here’s an overview of how you can secure your Go APIs with decentralized identity tokens:

Choose a decentralized identity framework. There are several frameworks available, such as Ethereum-based solutions like uPort, Sovrin or Hyperledger Indy. Select a framework that aligns with your requirements and integrates well with Go.

Generate and issue tokens. Once you have chosen a framework, you need to generate and issue identity tokens to your users. Typically, this involves a registration process, where users prove their identity and receive a unique token.

Validate tokens in your API. In your Go APIs, you need to implement token validation logic. This usually involves verifying the signature of the token using a public key associated with the decentralized identity framework. You can find libraries or packages that simplify this process, such as github.com/golang-jwt/jwt for Go.

Extract claims and authenticate users. After validating the token, you can extract the claims embedded within it. Claims typically include information about the user, such as their identity, roles or permissions. You can then use these claims to authenticate the user and authorize their access to specific API resources.

Implement authorization checks. Once you have authenticated the user, you can enforce authorization checks based on the user’s claims. For example, you might allow or deny access to certain API endpoints or data based on the user’s role or permissions. Implement appropriate authorization logic in your API handlers or middleware.
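
As a small illustration of the claims-extraction and authorization steps using the github.com/golang-jwt/jwt/v5 package (the "role" claim name and the error handling are simplified assumptions), a validated token's claims can be checked before the handler does any work:

// requireRole inspects the claims of an already-validated token and only
// lets the request proceed when the "role" claim matches the required value.
func requireRole(token *jwt.Token, required string, w http.ResponseWriter) bool {
    claims, ok := token.Claims.(jwt.MapClaims)
    if !ok {
        http.Error(w, "unexpected claims format", http.StatusUnauthorized)
        return false
    }
    role, _ := claims["role"].(string)
    if role != required {
        http.Error(w, "insufficient permissions", http.StatusForbidden)
        return false
    }
    return true
}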

Handle token expiry and revocation. Decentralized identity tokens often have an expiration time, so it’s crucial to prompt users to reauthenticate when their token expires. Additionally, you should consider implementing token revocation mechanisms in case a user’s token needs to be invalidated before it expires naturally.

Keep security best practices in mind. Best practices include protecting the private keys associated with token validation and securely transmitting tokens over HTTPS. Also consider additional security measures like rate limiting and request throttling.

The Advantages of Decentralized ID Tokens for Go APIs

In addition to the benefits already mentioned — enhanced security, greater privacy and control for users, reduced dependency on third-party or centralized authorities — decentralized identity tokens bring the following benefits when used in Go APIs:

Interoperability. Decentralized identity frameworks aim to establish interoperable standards, enabling seamless integration across different platforms, services, and organizations. This interoperability simplifies the implementation and adoption of decentralized identity tokens in Go APIs, promoting compatibility and consistency in identity-related interactions.

Future-proofing. The rise of decentralized identity represents an evolving landscape in digital identity management. By incorporating decentralized identity tokens in Go APIs, developers can future-proof their applications and be prepared for the increasing adoption of decentralized identity frameworks and technologies.

Developer flexibility. Decentralized identity tokens provide developers with flexibility in choosing and integrating with different frameworks that align with their requirements. This flexibility allows developers to leverage the benefits of decentralized identity while tailoring the implementation to their specific use cases and preferences.

Introduction to JWT and Its Key Components

Decentralized identity token standards, such as JSON Web Tokens (JWT), provide a structured format for representing and exchanging identity information in a secure and verifiable manner. JWT is a widely adopted standard for creating self-contained tokens that can be used to assert claims about the identity and access rights of a user.

JWT is an open standard (RFC 7519) that defines a compact and self-contained way to transmit information between parties as a JSON object. It consists of three parts: header, payload and signature.

The header of a JWT contains metadata about the token, such as the token type and the cryptographic algorithm used to sign the token. It is encoded in Base64Url format and is part of the token itself.

The payload contains the claims or statements about the identity of the user and additional data. Claims can include information like the user’s identity, roles, permissions, expiration time or any custom data required for authentication and authorization. The payload is also Base64Url encoded.

The signature is created by combining the encoded header, payload, and a secret or private key known only to the issuer. It ensures the integrity of the token and verifies that it hasn’t been tampered with. Verifying the signature with the corresponding public key allows the recipient to validate the authenticity of the token.
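
To make the three parts concrete, the short Go example below (using the github.com/golang-jwt/jwt/v5 package; the claim values and secret are placeholders) builds a token with a few claims and signs it with an HMAC secret, producing the familiar header.payload.signature string:

package main

import (
    "fmt"
    "time"

    "github.com/golang-jwt/jwt/v5"
)

func main() {
    // Payload: registered and custom claims about the user.
    claims := jwt.MapClaims{
        "sub":  "user-123",
        "role": "admin",
        "iss":  "https://issuer.example.com",
        "exp":  time.Now().Add(time.Hour).Unix(),
    }

    // Header is derived from the signing method (alg: HS256, typ: JWT).
    token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)

    // Signature: computed over the encoded header and payload with the secret.
    signed, err := token.SignedString([]byte("replace-with-a-real-secret"))
    if err != nil {
        panic(err)
    }
    fmt.Println(signed) // header.payload.signature, each part Base64Url-encoded
}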

JWT-based tokens have gained popularity due to their simplicity, flexibility and compatibility across different platforms and programming languages. They can be used in various use cases, such as single sign-on, secure API authentication, and authorization in distributed systems.

Note that while JWT is a widely used token format, it is just one example of a decentralized identity token standard. Other frameworks might have their own token formats and standards, depending on the specific technology and ecosystem being used.

Other Popular Decentralized Identity Frameworks

uPort

uPort is a decentralized identity platform built on the Ethereum blockchain. It allows individuals to create and manage their identities, control their data and interact securely with decentralized applications. uPort provides a JavaScript library that can be used in conjunction with Go APIs to integrate decentralized identity features. Introduced in 2015, uPort has since evolved into two separate projects: Veramo and Serto.

Sovrin

Sovrin is a decentralized identity network that leverages blockchain technology. It aims to provide self-sovereign identity capabilities with privacy and security. Sovrin uses a combination of distributed ledger technology and cryptographic techniques. Integration with Go APIs can be achieved through the Sovrin client libraries and SDKs.

Hyperledger Indy

Hyperledger Indy is an open source project under The Linux Foundation that focuses on decentralized identity and verifiable claims. It provides a framework for building self-sovereign identity systems and allows individuals to manage their identities and control the release of their personal information. Indy SDK offers Go language bindings that enable integration with Go APIs.

SelfKey

SelfKey is a decentralized identity and digital asset management platform. It allows users to control their identity attributes and manage their digital identity securely. SelfKey provides a range of developer tools, including APIs and SDKs, to integrate with Go APIs and implement decentralized identity functionality.

Integrating these frameworks with Go APIs typically involves using the provided libraries, SDKs or client APIs. These tools often offer methods for token generation, validation and access to decentralized identity features. Developers can leverage these resources to implement authentication and authorization mechanisms using tokens within their Go applications.

When integrating decentralized identity frameworks with Go, refer to the respective documentation and resources provided by the framework of choice. These resources often include code examples, tutorials and reference documentation that guide developers through the integration process and help them make the most of the decentralized identity capabilities within their Go APIs.

How to Generate and Issue Decentralized Identity Tokens

1. Choose a Decentralized Identity Framework.

Select a framework that aligns with your project’s requirements and integrates well with your technology stack. Examples include the aforementioned uPort, Sovrin, Hyperledger Indy or SelfKey. Refer to the documentation and resources provided by the chosen framework to understand their token issuance process.

2. Set up the Required Infrastructure.

Set up the necessary infrastructure to support token generation and issuance. This typically includes deploying or connecting to the decentralized identity framework’s network or blockchain. Follow the documentation provided by the framework for detailed instructions on infrastructure setup.

3. Implement a User Registration and Identity Verification Process.

This step ensures that the identities associated with the decentralized identity tokens are valid and trustworthy. Verification methods can include email verification, government-issued ID verification, or other mechanisms based on the requirements of your application.

4. Generate a Decentralized Identity Token.

Once the user’s identity is verified, generate a decentralized identity token for the user. The specific steps for generating a token will depend on the chosen decentralized identity framework. Typically, you’ll need to call the framework’s API or use their provided libraries to create a token with the required claims and data.

5. Embed User Claims and Information.

In the generated decentralized identity token, include the relevant claims and information about the user. Claims can include the user’s identity, roles, permissions or any additional data required for authentication and authorization within your application. Ensure that the token payload accurately represents the user’s verified identity and associated attributes.

6. Sign the Token.

Sign the token with a private key or secret known only to the issuer. This step ensures the integrity of the token and allows recipients to verify its authenticity. Follow the framework’s documentation for guidance on signing the token using the appropriate cryptographic algorithms and keys.

7. Deliver the Token to the User.

Provide the generated decentralized identity token to the user. The delivery method can vary based on your application’s architecture and requirements. You may send the token as a response after successful registration, store it securely on the user’s device, or utilize a token delivery mechanism provided by the chosen decentralized identity framework.

8. Validate the Token and Its Usage.

In your Go APIs, implement token validation logic to ensure that only valid and authentic decentralized identity tokens are accepted. Use the appropriate libraries or packages for JWT validation in Go, such as github.com/golang-jwt/jwt (the maintained successor to the now-archived github.com/dgrijalva/jwt-go). Validate the token’s signature, expiration and other relevant claims to authenticate and authorize the user’s access to API resources.

Registering and Verifying User Identities

The registration process and verification of user identities help ensure that only legitimate users receive the tokens. Here’s how a typical registration and identity verification process should work:

User registration. Users typically provide their basic information in order to register, such as name, email address and username, through a registration form or user interface.

Identity information collection. During registration, you may collect additional identity information depending on the requirements of your application. This can include personal details, contact information, and any other data necessary to establish the user’s identity. You should clearly communicate the purpose and use of this information to the user.

Verification Methods

To validate the user’s identity, you’ll need to implement one or more verification methods. Common methods include:

  • Email verification. Send a verification email to the user’s provided email address with a unique verification link. When the user clicks the link, it confirms that the email address is valid and accessible.
  • Document verification. Request users to provide government-issued identification documents, such as a passport or driver’s license, for verification. This process may involve manual or automated checks to ensure the authenticity of the documents.
  • Two-factor authentication (2FA). Implement a second layer of authentication, such as SMS verification or app-based authentication, to confirm the user’s identity.
  • Social media verification. Integrate with social media platforms to validate the user’s identity by verifying their accounts on platforms like Facebook, LinkedIn or Twitter.

Choose the verification methods that align with your application’s requirements, level of assurance needed and the sensitivity of the user’s data.

Verification process. Once the user provides the necessary information and selects the desired verification method(s), initiate the verification process. This involves validating the information provided and conducting the necessary checks based on the chosen verification method.

Identity confirmation. Upon successful verification, notify the user that their identity has been confirmed. Provide clear instructions on the next steps, including how to access the decentralized identity token associated with their verified identity.

Token generation and delivery. After confirming the user’s identity, generate the decentralized identity token with the relevant user claims and information. Follow the token generation steps specific to your chosen decentralized identity framework, as outlined in the earlier steps. Deliver the token securely to the user, whether it’s through an API response, email, or any other secure mechanism.

The registration process and identity verification should be designed to strike a balance between security, user experience and privacy considerations. Ensure that you handle user data responsibly, follow data protection regulations, and provide transparent communication regarding the storage and usage of user information.

To implement token validation logic in Go using the github.com/golang-jwt/jwt library, you can follow these steps:

Install the Library.

Use the following command to install the jwt-go library:

go get -u github.com/golang-jwt/jwt/v5

Import the Required Packages.

Import the necessary packages in your Go code:

import (
    "fmt"

    "github.com/golang-jwt/jwt/v5"
)

Define a Validation Function.

Create a function that takes the token string as input and returns an error if the token is invalid:

func validateToken(tokenString string) error {
    token, err := jwt.Parse(tokenString, func(token *jwt.Token) (interface{}, error) {
        // Reject tokens signed with an unexpected algorithm before returning a key.
        if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
            return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
        }
        // Return the key used to verify the signature; replace this with your own
        // secret or a key retrieved from configuration or a key service.
        return []byte("your-secret-key"), nil
    })

    if err != nil {
        return err
    }

    if !token.Valid {
        return fmt.Errorf("token is invalid")
    }

    return nil
}

Implement Token Validation.

In your API handler or middleware, call the validateToken function with the token string to validate the token:

// Requires the "net/http" and "strings" packages in addition to the imports above.
func YourHandler(w http.ResponseWriter, r *http.Request) {
    tokenString := r.Header.Get("Authorization")
    if tokenString == "" {
        // Handle missing token
        return
    }

    // Extract the token from the "Bearer <token>" format
    tokenString = strings.Replace(tokenString, "Bearer ", "", 1)

    err := validateToken(tokenString)
    if err != nil {
        // Handle invalid token
        return
    }

    // Token is valid, proceed with handling the request
    // ...
}


In the validateToken function, you can customize the key retrieval method based on your token signing approach. For example, if you are using a symmetric signing method, you can return the secret key directly.

If you are using an asymmetric signing method, you may need to retrieve the public key based on the token’s signing key ID (kid) or use a key retrieval service.
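
A hedged sketch of that asymmetric case is shown below; the key map is hypothetical, and in practice it would be populated from a JWKS endpoint or a key management service rather than hardcoded:

// publicKeys maps key IDs (kid) to RSA public keys. In a real system this
// would be filled from a JWKS endpoint or a key management service.
var publicKeys = map[string]*rsa.PublicKey{}

// keyFunc looks up the verification key based on the token's kid header.
func keyFunc(token *jwt.Token) (interface{}, error) {
    if _, ok := token.Method.(*jwt.SigningMethodRSA); !ok {
        return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
    }
    kid, _ := token.Header["kid"].(string)
    key, found := publicKeys[kid]
    if !found {
        return nil, fmt.Errorf("unknown key id %q", kid)
    }
    return key, nil
}

The resulting keyFunc can then be passed as the key-retrieval callback to jwt.Parse.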

Make sure to handle errors appropriately and define the logic for handling different validation scenarios based on your application’s requirements.

The code provided offers a basic structure for token validation using the jwt-go library. You may need to adapt it to your specific use case, including handling additional token claims, expiration checks and custom validation logic.

Remember to refer to the jwt-go library documentation for detailed information on using the library and exploring its advanced features and capabilities.

Verifying a Token’s Digital Signature

The process of verifying the token’s signature using public keys ensures that the token has been signed by the appropriate private key and has not been modified since its creation. Depending on your requirements, you may need to perform additional verification steps beyond the ones described here, such as checking the token’s expiration time, issuer (iss), audience (aud), or other custom claims.

Also, the exact implementation of the signature verification process may vary depending on the chosen library or cryptographic algorithms used.

The process described here provides a general understanding of the token signature verification concept using public keys. Always refer to the documentation and guidelines provided by the specific library or framework you are using for accurate implementation details.

The process of verifying a token’s signature typically involves the following steps:

Understand the Token Structure.

A token consists of three parts: the header, payload, and signature. The header contains metadata about the token, the payload contains the claims, and the signature ensures the integrity of the token.

Obtain the Public Key.

Retrieve the corresponding public key associated with the token’s signing algorithm. The public key can be obtained from a trusted source, such as a key management system, a certificate authority or a key distribution mechanism.

Verify the Signature.

Use the obtained public key to verify the token’s signature. The process involves the following steps:

  • Extract the algorithm and signing key identifier (kid) from the token’s header.
  • Retrieve the public key corresponding to the signing key identifier (kid).
  • Verify the signature using the public key and the algorithm specified in the header.
  • If the signature verification is successful, it indicates that the token has not been tampered with and is authentic.

Code Example for Token Validation

import (
    "crypto/rsa"
    "fmt"

    "github.com/golang-jwt/jwt/v5"
)

func validateToken(tokenString string, publicKey *rsa.PublicKey) (*jwt.Token, error) {
    token, err := jwt.Parse(tokenString, func(token *jwt.Token) (interface{}, error) {
        // Accept only RSA-signed tokens; reject anything else.
        if _, ok := token.Method.(*jwt.SigningMethodRSA); !ok {
            return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
        }
        return publicKey, nil
    })

    if err != nil {
        return nil, err
    }

    if !token.Valid {
        return nil, fmt.Errorf("token is invalid")
    }

    return token, nil
}

Best Practices for Securely Validating ID Tokens in Go

  • Always use a secure key storage mechanism. Store and manage the private and public keys securely. Avoid hardcoding or exposing the keys in the codebase. Use secure key management systems or encryption mechanisms to protect the keys.
  • Validate the token signature algorithm. Verify that the token’s signature algorithm matches the expected algorithm. Only accept tokens signed with trusted algorithms, such as RSA or ECDSA.
  • Verify the token’s expiration time. Check the token’s expiration time (exp) to ensure it has not expired. Reject tokens that have exceeded their expiration time. You can use the time.Now() function to compare the token’s expiration time with the current time.
  • Validate the token issuer and audience. Check the token’s issuer (iss) and audience (aud) claims, if applicable, to ensure they match the expected values. Reject tokens that are not issued by a trusted issuer or intended for the expected audience.
  • Implement additional custom claim checks. If your tokens contain custom claims, implement additional checks to validate those claims according to your application’s requirements. For example, you can check the user’s role, permissions or any other specific claims relevant to your authorization logic.
  • Handle token revocation. Consider implementing a token revocation mechanism if the framework or system you’re using supports it. This allows you to invalidate tokens in case of compromise, logout, or other scenarios where token revocation is necessary.
  • Apply rate limiting and throttling. Implement rate limiting and request throttling mechanisms to prevent brute-force attacks or excessive token validation requests. This helps protect your token validation endpoint from abuse.
  • Implement logging and monitoring. Log token validation activities and errors for auditing purposes. Monitor token validation performance and errors to identify any potential issues or anomalies.
  • Keep libraries and dependencies up-to-date. Regularly update the jwt-go library and other dependencies to ensure you have the latest security patches and bug fixes.
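
Several of these checks can be expressed directly as parser options when using github.com/golang-jwt/jwt/v5, which already rejects expired tokens when an exp claim is present. A hedged sketch, with placeholder issuer and audience values:

func validateWithOptions(tokenString string) (*jwt.Token, error) {
    // Parse returns a non-nil error when the signature or any of these
    // claim checks (expiry, issuer, audience) fail.
    return jwt.Parse(
        tokenString,
        keyFunc, // the key-retrieval callback shown earlier
        jwt.WithValidMethods([]string{"RS256"}),
        jwt.WithIssuer("https://issuer.example.com"),
        jwt.WithAudience("my-api"),
        jwt.WithLeeway(30*time.Second),
    )
}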

These code examples and best practices provide a starting point for securely validating decentralized identity tokens in Go. Customize them based on your specific requirements and security considerations, and refer to the jwt-go library documentation for further details and advanced usage.

Address High Scale Google Drive Data Exposure with Bulk Remediation
https://thenewstack.io/address-high-scale-google-drive-data-exposure-with-bulk-remediation/ (Tue, 26 Sep 2023)


Millions of organizations around the globe use SaaS applications like Google Drive to store and exchange company files internally and externally. Because of the collaborative nature of these applications, company files can easily be accessed by the public, held externally with vendors, or shared within private emails. Data exposure risk increases exponentially as companies scale their operations and internal data. Files shared through SaaS applications like Google Drive can expose significant business-critical data that could end up in the wrong hands.

As technology companies experience mass layoffs, IT professionals should take extra caution when managing shared file permissions. For example, if a company recently laid off an employee who shared work files externally with their private email, the former employee will still have access to the data. Moreover, if the previous employee begins working for a competitor, they can share sensitive company files, reports and data with their new employer. Usually, once internal links are publicly shared with an external source, the owner of the file is unable to see who else has access. This poses an enormous security risk for organizations as anyone, including bad actors or competitors, can easily steal personal or proprietary information within the shared documents.

Digitization and Widespread SaaS Adoption

Smaller, private companies tend to underestimate their risk of data exposure when externally sharing files. An organization is still at risk even if they only have a small number of employees. On average, one employee creates 50 new SaaS assets every week. It only takes one publicly-shared asset to expose private company data.

The growing adoption of SaaS applications and digital transformation are exacerbating this problem. In today’s digital age, companies are becoming more digitized and shifting from on-premises or legacy systems to the cloud. Within 24 months, a typical business’s total SaaS assets will multiply by four times. As organizations grow and scale, the amount of SaaS data and events becomes uncontrollable for security teams to maintain. Without the proper controls and automation in place, businesses are leaving a massive hole in their cloud security infrastructure that only worsens as time goes on. The longer they wait to tackle this challenge, the harder it becomes to truly gain confidence in their SaaS security posture.

Pros and Cons of Bulk Remediating

Organizations looking to protect themselves from this risk should consider bulk remediating their data exposure. By bulk remediating, IT leaders can quickly ensure that a large number of sensitive company files remain private and cannot be accessed by third parties without explicit permission. This is a quick way to maintain data security as organizations scale and become digitized.

However, as an organization grows, it will likely accumulate more employees, vendors and shared drives. When attempting to remediate inherited permissions for multiple files, administrators face the difficulty of ensuring accurate and appropriate access levels for each file and user. It requires meticulous planning and a thorough understanding of the existing permission structure to avoid unintended consequences.

Coordinating and executing bulk remediation actions can also be time-consuming and resource-intensive, particularly when dealing with shared drives that contain a vast amount of files and multiple cloud, developer, security, and IT teams with diverse access requirements. The process becomes even more intricate when trying to strike a balance between minimizing disruption to users’ workflows and enforcing proper data security measures.

Managing SaaS Data Security

Organizations looking to manage their SaaS data security should first understand their current risk exposure and the number of applications currently used within the company. This will help IT professionals gain a better understanding of which files to prioritize that contain sensitive information that needs to quickly be remediated. Next, IT leaders should look for an automated and flexible bulk remediation solution to help them quickly manage complex file permissions as the company grows.
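
As a very rough sketch of what such automation can look like for Google Drive specifically, the Go program below uses the google.golang.org/api/drive/v3 client to find files shared with anyone who has the link and strip their public permission. The query, field selection and credential setup are simplified assumptions; a real remediation workflow would add paging, dry-run reporting and owner notification.

package main

import (
    "context"
    "log"

    drive "google.golang.org/api/drive/v3"
)

func main() {
    ctx := context.Background()
    // Assumes application default credentials with an appropriate Drive scope
    // are already configured in the environment.
    srv, err := drive.NewService(ctx)
    if err != nil {
        log.Fatalf("creating Drive client: %v", err)
    }

    // Find files that are shared with anyone who has the link.
    files, err := srv.Files.List().
        Q("visibility = 'anyoneWithLink'").
        Fields("files(id, name, permissions(id, type))").
        Do()
    if err != nil {
        log.Fatalf("listing files: %v", err)
    }

    // Remove the public "anyone" permission from each exposed file.
    for _, f := range files.Files {
        for _, p := range f.Permissions {
            if p.Type == "anyone" {
                if err := srv.Permissions.Delete(f.Id, p.Id).Do(); err != nil {
                    log.Printf("could not remediate %s (%s): %v", f.Name, f.Id, err)
                    continue
                }
                log.Printf("removed public access from %s", f.Name)
            }
        }
    }
}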

Companies should ensure they are only using SaaS applications that meet their specific security standards. This is crucial not only to avoid data exposure, but also to comply with business compliance regulations. IT admins should reassess their overall data posture each quarter and evaluate whether current SaaS applications are properly securing their private assets. Automation workflows within specific bulk remediation plans should be continuously updated to ensure companies are not missing security blind spots.

Each organization has different standards and policies that they will determine as best practices to keep their internal data safe. As the world becomes increasingly digital and the demand for SaaS applications exponentially grows, it is important for businesses to ensure they are not leaving their sensitive data exposed to third parties. Those that fail to remediate their SaaS security might be the next victim of a significant data breach.

3 Tips to Secure Your Cloud Infrastructure and Workloads
https://thenewstack.io/3-tips-to-secure-your-cloud-infrastructure-and-workloads/ (Fri, 22 Sep 2023)


As companies move to the cloud for benefits like efficiency and scalability, it is the job of security teams to enable them to do so safely.

In this reality, it is vital that IT leaders understand how threat actors are targeting their cloud infrastructure. As one might suspect, attackers first go after low-hanging fruit — the systems and applications that are the easiest to exploit.

In the 2023 CrowdStrike Global Threat Report, our researchers noted that adversaries:

  • Target neglected cloud infrastructure slated for retirement that still contains sensitive data.
  • Use a lack of outbound restrictions and workload protection to exfiltrate data.
  • Leverage common cloud services as a way to obfuscate malicious activity.

Neglected or Misconfigured Cloud Infrastructure

Neglected and soon-to-be-retired infrastructure are prime targets for attackers, often because that infrastructure no longer receives security configuration updates and regular maintenance. Security controls such as monitoring, expanded logging, security architecture and planning, and posture management no longer exist for these assets.

Lack of Outbound Restrictions and Container Life Cycle Security

Unfortunately, we still see cases where neglected cloud infrastructure contains critical business data and systems. As such, attacks led to sensitive data leaks requiring costly investigation and reporting obligations. Additionally, some attacks on abandoned cloud environments resulted in impactful service outages, since those environments still provided critical services that hadn’t been fully transitioned to new infrastructure. Moreover, the triage, containment and recovery from incidents in these environments had a tremendous negative impact on some organizations.

Launching Attacks from the Cloud

Not only are attackers targeting cloud infrastructure, but we also observed threat actors leveraging the cloud to make their attacks more effective. Over the past year, threat actors used well-known cloud services, such as Microsoft Azure, and data storage syncing services, such as MEGA, to exfiltrate data and proxy network traffic. A lack of outbound restrictions combined with a lack of workload protection allowed threat actors to interact with local services over proxies to IP addresses in the cloud. This gave attackers additional time to interrogate systems and exfiltrate data from services ranging from partner-operated, web-based APIs to databases — all while appearing to originate from inside victims’ networks. These tactics allowed attackers to dodge detection by barely leaving a trace on local file systems.

So How Do I Protect My Cloud Environment?

The cloud introduces new wrinkles to proper protection that don’t all translate exactly from a traditional on-premises data center model. Security teams should keep the following firmly in mind as they strive to remain grounded in best practices.

  • Enable runtime protection to obtain real-time visibility. You can’t protect what you don’t have visibility into, even if you have plans to decommission the infrastructure. Central to securing your cloud infrastructure to prevent a breach is runtime protection and visibility provided by cloud workload protection (CWP). It remains critical to protect your workloads with next-generation endpoint protection, including servers, workstations and mobile devices, regardless of whether they reside in an on-premises data center, virtual cluster or hosted in the cloud.
  • Eliminate configuration errors. The most common root cause of cloud intrusions continues to be human errors and omissions introduced during common administrative activities. It’s important to set up new infrastructure with default patterns that make secure operations easy to adopt. One way to do this is to use a cloud account factory to create new sub-accounts and subscriptions easily. This strategy ensures that new accounts are set up in a predictable manner, eliminating common sources of human error. Also, make sure to set up roles and network security groups that keep developers and operators from needing to build their own security profiles and accidentally doing it poorly.
  • Leverage a cloud security posture management (CSPM) solution. Ensure your cloud account factory includes enabling detailed logging and a CSPM — like the security posture included in CrowdStrike Falcon Cloud Security — with alerting to responsible parties including cloud operations and security operations center (SOC) teams. Actively seek out unmanaged cloud subscriptions, and when found, don’t assume it’s managed by someone else. Instead, ensure that responsible parties are identified and motivated to either decommission any shadow IT cloud environments or bring them under full management along with your CSPM. Then use your CSPM on all infrastructure up until the day the account or subscription is fully decommissioned to ensure that operations teams have continuous visibility.

Because the cloud is dynamic, so too must be the tools used to secure it. The visibility needed to see the type of attack that traverses from an endpoint to different cloud services is not possible with siloed security products that only focus on a specific niche. However, with a comprehensive approach rooted in visibility, threat intelligence and threat detection, organizations can give themselves the best opportunity to leverage the cloud without sacrificing security.

OAuth.Tools: The Online Tool That Goes beyond JWTs
https://thenewstack.io/oauth-tools-the-online-tool-that-goes-beyond-jwts/ (Fri, 22 Sep 2023)


JSON Web Tokens (JWTs) are powerful and convenient tools for securing APIs. Their format is standardized; they are cryptographically protected, self-contained and simply very handy. Since JWTs are commonly unencrypted, you can easily parse them and inspect their structure and content. You can use convenient online tools for that purpose. Let me share a personal tip: Check out OAuth.Tools.

OAuth.Tools is a free online tool provided by Curity. It offers incredible features for anyone working with or interested in OAuth and OpenID Connect. These protocols are commonly used to “outsource login.” OAuth is much more than just the access token. As such, you can use OAuth.Tools to decode or create JWTs with different characteristics, fetch tokens from a server, revoke tokens or add an access token to external API calls and check the behavior.

Configuration

The Curity Playground

OAuth.Tools provides a preconfigured environment and workspace, the Curity Playground. As the name suggests, you can use that to play around with the tool. It also includes some examples that you can easily run without the need for any configuration or installation.

The examples demonstrate how to use various flows and provide a quick start. For example, to decode a JWT, simply click “Demo: JWT Token” and copy and paste the value in the code field. For other flows, the Curity Playground is configured with demo clients. To fetch a token, for example, you can try out the “Demo: Code Flow” configured with a client. You can find the clients in the workspace settings. They enable anyone to run an OAuth flow quickly without prior knowledge or manual configuration.

Customize Settings

You may use OAuth.Tools with any OAuth-compliant server. The only requirement is that the OAuth services must be accessible over the internet.

From the main menu, you can create, import, export or share configurations via links. In that way, the workspace configuration becomes portable. You can return to it at another point or share your work with a colleague.

A workspace represents the integration with a service, such as the Curity Playground. In the settings, you specify the different URLs and endpoints for the service. Preferably, you use a discovery service like the OIDC service discovery or WebFinger to automatically retrieve some of the settings. You then only have to enter the client details you received from the service provider. Refer to the service provider’s documentation for how to register an OAuth client (sometimes referred to as app registration).

Features

Great Overview

Next to the main menu is the list of flows. A flow is basically a task, like a request or a series of requests (in the case of the code flow, for example). “Decode JWT,” “Create JWT” and OAuth-related requests like “Code Flow” or “Client Credentials Flow” are all examples of tasks that OAuth.Tools supports. Even new features like verifiable credential issuance (VCI) are supported. You can organize your work by grouping flows.

The main window shows two panes. The left pane is the configuration pane where you enter the flow details. The right pane shows the result — the body of a request. For example, when decoding JWTs, you enter the encoded JWT in the left pane and OAuth.Tools lists the details in the right pane.

Helpful Insights

Both panes in the main window provide very useful information. For example, OAuth.Tools highlights the different parts of a JWT — the header, the payload (data) and the signature — in the input field. If you provide a signature verification key next to the JWT, OAuth.Tools validates its signature and prints a nice green box in the result pane.

When validating JWTs, you can also select a type and OAuth.Tools will let you know if the provided JWT meets the requirements for that type. For example, if a JWT is supposed to be an access token, it should contain an aud and scope claim. OAuth.Tools displays a warning if those claims are missing. With that feature, you can parse a JWT, validate its signature and quickly verify that it also complies with standards and best practices.

OAuth.Tools provides helpful information in many cases beyond JWT decoding. When creating a flow, it allows you to set common settings using UI elements. For example, it allows you to enable the PKCE (proof key for code exchange) or create a signed request (JWT Secured Authorization Request) with single switches. OAuth.Tools is educational, as it does not require much knowledge about the protocol, but you will eventually gain some.

The strength of OAuth.Tools lies in the details. If available, OAuth.Tools lists request details like all the query parameters. For example, the code flow is a two-step flow where the first step starts in the front channel, the browser. OAuth.Tools shows how the browser receives an authorization code and allows you to swap it for tokens. You may copy and paste requests and run them in the browser or terminal instead. In addition, OAuth.Tools prints server responses — consequently, OAuth.Tools is handy for testing and debugging OAuth and OpenID Connect integrations.

Try It Out

What I like about OAuth.Tools is its completeness. Not only does it support many flows, but it also provides the necessary supporting features. For example, whenever a key is required, you can simply press a button to create one. Also, when a flow requires a token, you can select an appropriate one that comes from another flow. This means you can create a code flow to get a token and run an introspection flow to list its details. There are even shortcuts for that!

Whatever your business with OAuth is, whether you are an experienced user or a novice, you should try out OAuth.Tools.

What Is Infrastructure as Code Scanning?
https://thenewstack.io/what-is-infrastructure-as-code-scanning/ (Thu, 21 Sep 2023)


Infrastructure as Code, or IaC, is something that tends to excite DevOps teams and security teams alike. For DevOps, IaC provides a means of automating and scaling processes that would take a long time to complete manually. And from a security perspective, IaC offers the benefit of reducing the chances that engineers will introduce security risks into IT environments through manual configuration oversights or errors.

That said, IaC only makes IT environments more secure if your IaC code itself is secure. Problems in IaC code can easily become the weakest link in your security strategy if you don’t identify them before putting the code to use.

That’s why having an IaC scanning strategy in place is critical for ensuring that developers, DevOps engineers and anyone else who takes advantage of IaC can do so without undercutting security priorities. Keep reading for an overview of why IaC scanning is important, how it works and how to leverage it to maximum effect.

What Is IaC?

IaC is the use of code to manage IT infrastructure provisioning and configuration. When you use IaC, you write code that defines how you want a resource to be provisioned. You then use an IaC platform (such as Terraform or Ansible, to name just a couple popular IaC tools) that automatically applies that configuration to the resources you specify.

In this way, IaC saves engineers a lot of time because it allows them to apply the same configuration to as many resources as they want automatically. IaC also reduces the risk of configuration errors that could occur if engineers were setting up each resource by hand and accidentally applied the wrong settings in some instances.

What Is IaC Scanning?

IaC scanning is the use of automated tools to validate the IaC configuration files. In other words, when you perform IaC scanning, you scan the IaC code that defines how you want resources to be configured. The IaC scanners can detect potential mistakes or security issues that lie within the code.

IaC scanning goes hand in hand with the concept of shift-left security, which means performing security checks as early as possible in the software delivery life cycle. With IaC scanning, you can easily validate whether your planned configurations are secure before you apply them. In that way, you can detect security risks earlier in the software delivery process, before the configurations are deployed.

Why Is IaC Scanning Important?

IaC scanning is important because mistakes or oversights that exist in IaC code will be repeated across the resources to which you apply the code. By scanning your IaC code before applying it, you can catch and resolve problems before they affect live resources.

As an example of how IaC scanning can benefit an organization, imagine you wrote the following IaC code to deploy a containerized application using Terraform:

resource "docker_container" "my_container" {
  name       = "my_container"
  image      = "my_image"
  command    = "bash"
  privileged = true
  user       = "root"
}


This code configures a container to run in privileged mode as the root user. Terraform won’t stop you from running a container in this way, but doing so presents a security risk. If your container runs as root, attackers who manage to compromise the container can more easily escalate the attack to take control of the host operating system and any other containers running on the system.

For this reason, most IaC scanners would flag this configuration and warn you of the potential dangers. You could then modify your code so that your containers do not run in privileged mode when you deploy them based on this code.
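
As a rough illustration of what that looks like in practice, the commands below run two widely used open source IaC scanners against a project containing this file, either locally or as a CI step. The tools shown here are examples rather than a recommendation, and whether a given rule set flags this exact resource depends on the scanner.

# Scan the Terraform code in the current directory with Checkov
checkov -d .

# Or scan it with tfsec; a non-zero exit code can be used to fail a CI build
tfsec .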

IaC scanning can also help to detect configuration errors, such as misconfigured file paths or user parameters, that might cause resources not to run properly. However, the main benefit of IaC scanning is that it helps protect against security risks.

Best Practices for Choosing an IaC Scanning Solution

There are a number of IaC scanners on the market today. When choosing from the various options, look for an IaC scanning tool that delivers the following capabilities:

  • Broad IaC framework support: Ideally, your IaC scanner will be able to validate IaC code written for any IaC framework — Terraform, Ansible, CloudFormation and so on — rather than only supporting one or two types of IaC frameworks.
  • CI/CD integration: The most efficient IaC scanners integrate with CI/CD tooling so that scans happen as an integral part of the software delivery process.
  • Comprehensive risk detection: The errors that can exist in IaC code come in many forms. The best IaC scanners are capable of detecting a wide range of problems — from vulnerable dependencies, to access control misconfigurations, to typos that might cause security policies not to be applied properly, and beyond.
  • Risk prioritization: Not all IaC security risks are of equal severity. A good IaC scanner will assess each risk it discovers and highlight those that pose the greatest threat so that you know which ones to tackle first.

Conclusion: Using IaC Responsibly

IaC is a powerful tool for accelerating and scaling complex IT processes while also avoiding the risk of security problems triggered by manual configuration oversights.

However, if the code that governs your IaC workflows is insecure, IaC can quickly become a source of security risks rather than a way to mitigate them. Mitigate this challenge by deploying IaC scanners as part of your CI/CD process and leveraging scanning to drive shift-left security.

Want to learn more about how to secure your cloud infrastructure and improve your overall security posture?

Orca Security provides a shift-left approach to security by integrating IaC scanning early in your CI/CD process. The Orca Cloud Security Platform offers a comprehensive solution for diagnosing vulnerabilities, misconfigurations and compliance issues in your cloud environment, providing an all-inclusive view of your risk posture. By identifying and mitigating security risks early in the development cycle, Orca Security helps you achieve shift-left security and reduce the overall risk to your cloud infrastructure.

Request a demo or sign up for a free cloud risk assessment to learn more about how Orca Security can help you secure your cloud infrastructure and improve your overall security posture.

The 6 Pillars of Platform Engineering: Part 1 — Security https://thenewstack.io/the-6-pillars-of-platform-engineering-part-1-security/ Wed, 20 Sep 2023 18:00:10 +0000 https://thenewstack.io/?p=22718618


Platform engineering is the discipline of designing and building toolchains and workflows that enable self-service capabilities for software engineering teams. These tools and workflows comprise an internal developer platform, which is often referred to as just “a platform.” The goal of a platform team is to increase developer productivity, facilitate more frequent releases, improve application stability, lower security and compliance risks and reduce costs.

This guide outlines the workflows and checklist steps for the six primary technical areas of developer experience in platform engineering. Published in six parts, this first part introduces the series and focuses on security. (Note: You can download a full PDF version of the six pillars of platform engineering for the complete set of guidance, outlines and checklists.)

Platform Engineering Is about Developer Experience

The solutions engineers and architects I work with at HashiCorp have supported many organizations as they scale their cloud operating model through platform teams, and the key for these teams to meet their goals is to provide a satisfying developer experience. We have observed two common themes among companies that deliver great developer experiences:

  1. Standardizing on a set of infrastructure services to reduce friction for developers and operations teams: This empowers a small, centralized group of platform engineers with the right tools to improve the developer experience across the entire organization, with APIs, documentation and advocacy. The goal is to reduce tooling and process fragmentation, resulting in greater core stability for your software delivery systems and environments.
  2. A Platform as a Product practice: Heritage IT projects typically have a finite start and end date. That’s not the case with an internal developer platform. It is never truly finished. Ongoing tasks include backlog management, regular feature releases and roadmap updates to stakeholders. Think in terms of iterative agile development, not big upfront planning like waterfall development.

No platform should be designed in a vacuum. A platform is effective only if developers want to use it. Building and maintaining a platform involves continuous conversations and buy-in from developers (the platform team’s customers) and business stakeholders. This guide functions as a starting point for those conversations by helping platform teams organize their product around six technical elements or “pillars” of the software delivery process along with the general requirements and workflow for each.

The 6 Pillars of Platform Engineering

What are the specific building blocks of a platform strategy? In working with customers in a wide variety of industries, the solutions engineers and architects at HashiCorp have identified six foundational pillars that comprise the majority of platforms, and each one will be addressed in a separate article:

  1. Security
  2. Pipeline (VCS, CI/CD)
  3. Provisioning
  4. Connectivity
  5. Orchestration
  6. Observability

Platform Pillar 1: Security

The first questions developers ask when they start using any system are: “How do I create an account? Where do I set up credentials? How do I get an API key?” Even though version control, continuous integration and infrastructure provisioning are fundamental to getting a platform up and running, security also should be a first concern. An early focus on security promotes a secure-by-default platform experience from the outset.

Historically, many organizations invested in network perimeter-based security, often described as a “castle-and-moat” security approach. As infrastructure becomes increasingly dynamic, however, perimeters become fuzzy and challenging to control without impeding developer velocity.

In response, leading companies are choosing to adopt identity-based security, identity-brokering solutions and modern security workflows, including centralized management of credentials and encryption methodologies. This promotes visibility and consistent auditing practices while reducing operational overhead in an otherwise fragmented solution portfolio.

Leading companies have also adopted “shift-left” security: implementing security controls throughout the software development lifecycle, leading to earlier detection and remediation of potential attack vectors and increased vigilance around control implementations. This approach demands automation by default instead of ad hoc enforcement.

Enabling this kind of DevSecOps mindset requires tooling decisions that support modern identity-driven security. There also needs to be an “as code” implementation paradigm to avoid ascribing and authorizing identity based on ticket-driven processes. That paves the way for traditional privileged access management (PAM) practices to embrace modern methodologies like just-in-time (JIT) access and zero-trust security.

Identity Brokering

In a cloud operating model approach, humans, applications and services all present an identity that can be authenticated and validated against a central, canonical source. A multi-tenant secrets management and encryption platform along with an identity provider (IdP) can serve as your organization’s identity brokers.

Workflow: Identity Brokering

In practice, a typical identity brokering workflow might look something like this:

  1. Request: A human, application, or service initiates interaction via a request.
  2. Validate: One (or more) identity providers validate the provided identity against one (or more) sources of truth/trust.
  3. Response: An authenticated and authorized validation response is sent to the requestor.

Identity Brokering Requirements Checklist

Successful identity brokering has a number of prerequisites:

  • All humans, applications and services must have a well-defined form of identity.
  • Identities can be validated against a trusted IdP.
  • Identity systems must be interoperable across multi-runtime and multicloud platforms.
  • Identity systems should be centralized or have limited segmentation in order to simplify audit and operational management across environments.
  • Identity and access management (IAM) controls are established for each IdP.
  • Clients (humans, machines and services) must present a valid identity for AuthN and AuthZ.
  • Once verified, access is brokered through deny-by-default policies to minimize impact in the event of a breach.
  • AuthZ review is integrated into the audit process and, ideally, is granted just in time.
    • Audit trails are routinely reviewed to identify excessively broad or unutilized privileges and are retroactively analyzed following threat detection.
    • Historical audit data provides non-repudiation and compliance for data storage requirements.
  • Fragmentation is minimized with a flexible identity brokering system supporting heterogeneous runtimes, including:
    • Platforms (VMware, Microsoft Azure VMs, Kubernetes/OpenShift, etc.)
    • Clients (developers, operators, applications, scripts, etc.)
    • Services (MySQL, MSSQL, Active Directory, LDAP, PKI, etc.)
  • Enterprise support 24/7/365 via a service level agreement (SLA)
  • Configured through automation (infrastructure as code, runbooks)

Access Management: Secrets Management and Encryption

Once identity has been established, clients expect consistent and secure mechanisms to perform the following operations:

  • Retrieving a secret (a credential, password, key, etc.)
  • Brokering access to a secure target
  • Managing secure data (encryption, decryption, hashing, masking, etc.)

These mechanisms should be automatable — requiring as little human intervention as possible after setup — and promote compliant practices. They should also be extensible to ensure future tooling is compatible with these systems.

Workflow: Secrets Management and Encryption

A typical secrets management workflow should follow five steps (a brief command-line sketch follows the list below):

  1. Request: A client (human, application or service) requests a secret.
  2. Validate: The request is validated against an IdP.
  3. Request: The secret is served directly if it is managed by the platform. Alternatively:
    1. The platform requests a temporary credential from a third party.
    2. The third-party system responds to the brokered request with a short-lived secret.
  4. Broker response: The initial response passes through an IAM cryptographic barrier for offload or caching.
  5. Client response: The final response is provided back to the requestor.

(Figure: Secrets management flow)
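
These steps are intentionally tool-agnostic. As a hedged sketch only, here is how the request/validate/response flow can look with an open source secrets manager such as HashiCorp Vault; the auth method, mount paths and role names are assumptions for illustration, not a prescribed configuration.

# 1-2. The client authenticates; an OIDC identity provider validates the identity
vault login -method=oidc

# 3. Request a dynamically generated, short-lived database credential
#    (assumes a database secrets engine with a role named "readonly")
vault read database/creds/readonly

# 3-5. Static secrets can instead be served from a key/value mount,
#      with the time-bound response returned to the requestor
vault kv get secret/myapp/config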

Access Management: Secure Remote Access (Human to Machine)

Human-to-machine access in the traditional castle-and-moat model has always been inefficient. The workflow requires multiple identities, planned intervention for AuthN and AuthZ controls, lifecycle planning for secrets and complex network segmentation planning, which creates a lot of overhead.

While PAM solutions have evolved over the last decade to provide delegated solutions like dynamic SSH key generation, this does not satisfy the broader set of ecosystem requirements, including multi-runtime auditability or cross-platform identity management. Introducing cloud architecture patterns such as ephemeral resources, heterogeneous cloud networking topologies, and JIT identity management further complicates the task for legacy solutions.

A modern solution for remote access addresses the challenges of ephemeral resources and the complexities that come with them, such as dynamic resource registration, identity, access and secrets. These modern secure remote access tools no longer rely on network access such as VPNs as an initial entry point, CMDBs, bastion hosts, manual SSH and/or secrets managers with check-in/check-out workflows.

Enterprise-level secure remote access tools use a zero-trust model where human users and resources have identities. Users connect directly to these resources. Scoped roles — via dynamic resource registries, controllers, and secrets — are automatically injected into resources, eliminating many manual processes and security risks such as broad, direct network access and long-lived secrets.

Workflow: Secure Remote Access (Human to Machine)

A modern remote infrastructure access workflow for a human user typically follows these eight steps:

  1. Request: A user requests system access.
  2. Validate (human): Identity is validated against the trusted identity broker.
  3. Validate (to machine): Once authenticated, authorization is validated for the target system.
  4. Request: The platform requests a secret (static or short-lived) for the target system.
  5. Inject secret: The platform injects the secret into the target resource.
  6. Broker response: The platform returns a response to the identity broker.
  7. Client response: The platform grants access to the end user.
  8. Access machine/database: The user securely accesses the target resource via a modern secure remote access tool.

(Figure: Secure remote access flow)

Access Management Requirements Checklist

All secrets in a secrets management system should be:

  • Centralized
  • Encrypted in transit and at rest
  • Limited in scoped role and access policy
  • Dynamically generated, when possible
  • Time-bound (i.e., defined time-to-live — TTL)
  • Fully auditable

Secrets management solutions should:

  • Support multi-runtime, multicloud and hybrid-cloud deployments
  • Provide flexible integration options
  • Include a diverse partner ecosystem
  • Embrace zero-touch automation practices (API-driven)
  • Empower developers and delegate implementation decisions within scoped boundaries
  • Be well-documented and commonly used across industries
  • Be accompanied by enterprise support 24/7/365 based on an SLA
  • Support automated configuration (infrastructure as code, runbooks)

Additionally, systems implementing secure remote access practices should:

  • Dynamically register service catalogs
  • Implement an identity-based model
  • Provide multiple forms of authentication capabilities from trusted sources
  • Be configurable as code
  • Be API-enabled and contain internal and/or external workflow capabilities for review and approval processes
  • Enable secrets injection into resources
  • Provide detailed role-based access controls (RBAC)
  • Provide capabilities to record actions, commands, sessions and give a full audit trail
  • Be highly available, multiplatform, multicloud capable for distributed operations, and resilient to operational impact

Stay tuned for our post on the second pillar of platform engineering: version control systems (VCS) and the continuous integration/continuous delivery (CI/CD) pipeline. Or download a full PDF version of the six pillars of platform engineering for the complete set of guidance, outlines and checklists.

What to Know about Container Security and Digital Payments https://thenewstack.io/what-to-know-about-container-security-and-digital-payments/ Wed, 20 Sep 2023 13:19:03 +0000 https://thenewstack.io/?p=22718587


Managing containers in the world of digital payments just got a little easier. Now that containers are a preferred option for cloud native architectures, practitioners need guidance to support highly regulated industries, such as financial services and payments. While the PCI guidelines for virtual machines (VMs) are still in use and likely will be for many years, they’ve evolved to include container orchestration. Here’s what you need to know.

New guidance for containers and container orchestration was released by the Payment Card Industry (PCI) Security Standards Council last year. These guidelines are important for any business that processes credit card payments and wants to use containers at scale to support its business goals. The new guidance is part of the PCI Data Security Standards (DSS) 4.0 and is similar to that for VMs, but there are important differences. Any business or practitioner working with a company that takes credit card payments can now reference a set of best practices to help them meet the latest PCI DSS requirements when using containers and container orchestration tools.

I participated in working groups that helped update the PCI guidelines. As a qualified security assessor (QSA), I performed PCI audits, payment application audits and penetration tests for many companies. Eventually, I started a company that helps small and midsize organizations that were struggling with the technical requirements PCI imposed on their applications and infrastructure. At that time, most application infrastructure was run on virtual machines, and many companies were struggling to understand the implications on their application infrastructure. That’s why the PCI Virtualization Special Interest Group (SIG) proved to be an invaluable resource for practitioners and large infrastructure teams alike when it published the PCI Data Security Standards Virtualization Guidelines in 2011.

Today, with the popularity of containers among large enterprises, the PCI Special Interest Group (SIG) focusing on container orchestration aims to do the same for a cloud native world. VMware — along with other SIG participants that work with modern orchestration systems and that represent companies from all over the world — created new guidance for applying PCI requirements to containers and container orchestration, which you can read about in this blog post.

Like the virtualization guidelines before, the guidance for containers and container orchestration is not a step-by-step guide to achieving PCI DSS compliance. Rather, it’s an overview of unique threats, best practices, and example use cases to help businesses and practitioners better understand the technologies and practices available that can help with PCI compliance when using containers.

Among other things, the guidance includes a list of common threats specific to containerized environments and the best practices to address each threat. Some of the guidelines will sound familiar, because there are some very basic principles and best practices that just make sense regardless of your environment. The threats and best practices are segmented by use cases like baseline, development and management, account data transmission, and containerization in a mixed scope environment. These use cases help practitioners understand the scope in which each best practice is intended to apply.

The working group also breaks down the best practices into 16 subsections:

  1. Authentication
  2. Authorization
  3. Workload security
  4. Network security
  5. PKI
  6. Secrets management
  7. Container orchestration tool auditing
  8. Container monitoring
  9. Container runtime security
  10. Patching
  11. Resource management
  12. Container image building
  13. Registry
  14. Version management
  15. Configuration management
  16. Segmentation

While some of these are applicable outside of containerized environments and are considered good security hygiene, some areas are specific to containers. Here are the ones I believe are most critical for containerized environments:

  • Workload security – In a containerized environment, the workload is the actual container or application. In virtualized environments, PCI defines everything as a system. In containerized environments, a workload is defined as a smaller unit or instance. When applications are packaged as “containers” that fully encapsulate a minimal operating system layer along with an application runtime (e.g., .NET, Node.js and Spring), there are no external dependencies, and all internal dependencies are running at versions required by the application.
  • Container orchestration tool auditing – The main benefit of orchestration tools is automation. The tools can include CI/CD, pipelines, supply chains and even Kubernetes. PCI recognizes that automation is just as critical to the environment as the application and the data, and, as such, the tools you are using must be audited.
  • Container monitoring – Because of the ephemeral nature of containers, container monitoring should not be tied to a specific instance. PCI suggests a secured, centralized log monitoring system that allows us to make better correlations of events across instances of the same container. In addition, PCI suggests monitoring and auditing access to the orchestration system API(s) for indications of unauthorized access.
  • Container runtime security – As with container monitoring, PCI’s recommendations for container runtime security are essentially the same as those for VMs. By calling out container runtime security separately, PCI is recognizing that containers are distinct from VMs and, as such, have their own unique runtime elements.
  • Resource management – Container orchestration capabilities like those found in Cloud Foundry or Kubernetes include resource management as part of the platform. Since it’s common to have workloads (i.e., containers and apps) share clusters and resources, PCI recommends having defined resource limits to reduce the risk of availability issues with workloads in the same cluster. With virtualized environments, this is defined as availability.
  • Container image building – This is perhaps the most pronounced difference between the PCI recommendations for VMs and containers because images are unique to containers. They are what allow us to run a container anywhere. It is also why there is so much we can do with tooling, building and releasing containers, and why the implications for your security posture are profound. Automating the image build lets us set policies to specify which is the trusted (or “golden”) image. Container provenance and dependencies are major concerns for QSAs and security teams. Consider this: A quick search on Docker Hub for Java produces a list of 10,000 images using Java. While some are “official” images, many are built by unknown people around the world, might not have been updated in a long time, or could include code that compromises the security of businesses using them. (A short sketch of provenance checks follows this list.)
  • Registry – Registries are required if you are running containers at scale. They function as gatekeepers, enforcing policies around things like image signing and scanning.
  • Version management – Container orchestration systems can run blue-green deploys based on the version management system. This means we can deploy or roll back an upgrade with no effect on our application. Version management should also be used for more than just application changes. Platforms should be automated and have configuration in version control and be treated like any other product that the company might create, operate and manage.
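
To make the image provenance point concrete, here is one hedged sketch of the kind of checks a build pipeline might run before an image is admitted to a trusted registry. The tools shown (Cosign for signature verification, Trivy for vulnerability scanning) and the image name are illustrative choices, not part of the PCI guidance itself.

# Verify the candidate image was signed with the organization's public key
cosign verify --key cosign.pub registry.example.com/payments/app:1.4.2

# Scan the image for known vulnerabilities before promoting it
trivy image --severity HIGH,CRITICAL registry.example.com/payments/app:1.4.2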

The final section of the orchestration guidance ties together the threats and best practices with practical use cases to help businesses and QSAs apply the provided guidance in real-world scenarios. Each use case provides a diagram and documentation describing how the environment is organized, highlights the relevant threats and maps them back to the previously documented threats and best practices.

PCI is a complex process that has many “it depends” scenarios. While the PCI guidance is helpful, this white paper gives a practical example using VMware technologies. You can also reach out to the PCI Security Standards Council or your QSA if you have questions about PCI and your business.

Rita Manachi contributed to this article.

Whose IP Is It Anyway? AI Code Analysis Can Help https://thenewstack.io/whose-ip-is-it-anyway-ai-code-analysis-can-help/ Tue, 19 Sep 2023 13:31:24 +0000 https://thenewstack.io/?p=22718438


With generative AI tools like OpenAI, ChatGPT and GitHub Copilot flooding the software development space, developers are quickly adopting these technologies to help automate everyday development tasks. A recent Stack Overflow survey found an overwhelming 70% of its 89,000 respondents are either currently employing AI tools in their development process or are planning to do so in 2023.

In response to the growing AI landscape, new AI tools that can perform code analysis are coming on the market. These tools let developers submit code blocks or snippets generated by AI and receive feedback on whether the code matches an open source project and, if so, which license and copyright terms that project carries. With this information, teams can have confidence that they are not building and shipping applications that contain someone else’s protected intellectual property.

Synopsys Senior Sales Engineer Frank Tomasello recently hosted a webinar, “Black Duck Snippet Matching and Generative AI Models,” to discuss the rise of AI and how our snippet analysis technology helps protect teams and IP in this uncertain frontier. We touch upon the key webinar takeaways below.

The Risks of AI-Assisted Programming

The good: Fewer resource constraints. The bad: Inherited code with unknown restrictions. The ugly: License conflicts with potential legal implications.

Citing the Stack Overflow survey noted above, Tomasello underscored in the webinar that we are well on our way to adopting an industry-wide shift toward AI-assisted programming. While beneficial from a resource and timing constraint perspective, lazy or insecure use of AI can mean a whole world of trouble.

AI tools like Copilot and ChatGPT function based on learning algorithms that use vast repositories of public and open source code. These models then use the context provided by their users to suggest lines of code to incorporate into proprietary projects. At face value, this is tremendously helpful in speeding up development and minimizing resource limitations. However, given that open source was used to train these tools, it is essential to recognize the possibility that a significant portion of this public code is either copyrighted or subject to more restrictive licensing conditions.

The worst-case scenario is already playing out; earlier this year, GitHub and OpenAI faced groundbreaking class-action lawsuits that claim violations of copyright laws for allowing Copilot and ChatGPT to generate sections of code without providing the necessary credit or attribution to original authors. The fallout from these and inevitable future lawsuits remains to be seen, but the litigation is something that no organization wants to face.

The danger here is therefore not the use of generative AI tools, but the failure to complement their use with tools capable of identifying license conflicts and their potential risk.

The Challenge of Securing AI-Generated Code

We’ve seen the consequences of failing to adhere to license requirements over and over, long before AI: think Cisco Systems v. the Free Software Foundation in 2008 and Artifex Software v. Hancom in 2017. The risk remains the same; as AI-assisted software development advances, it’s becoming ever more crucial for companies to remain vigilant about potential copyright violations and maintain strict compliance with the terms of open source licenses.

Business leaders are concerned with implementing AI guardrails and protections, but they often lack a tactical or sustainable approach. Today, most organizations either ignore security needs entirely or take an unsustainably manual approach. The manual approach involves considerable resourcing to maintain — more people, more money and more time to complete. With uncertain economic conditions and limited capacity, organizations are struggling to dedicate the necessary effort for this task. In addition, the complexity of license regulations necessitates a level of expertise and training that organizations likely lack.

Further compounding the issue is the element of human error. It would be unrealistic to expect developers to painstakingly investigate every single license that is mapped to every single open source component and successfully identify all associated licenses, especially given the massive scale of open source usage in modern applications.

What’s required is an automated solution that goes above and beyond common open source discovery methods to help teams simplify and accelerate the compliance aspect of open source usage.

How Synopsys Can Help

While most SCA tools parse files generated by package managers to resolve open source dependencies, that’s not sufficient to identify the IP obligations associated with AI-generated code. This code is usually provided in blocks or snippets that will not be recognized by package managers or included in files like package.json or pom.xml. That’s why you need a tool that goes several steps further in identifying open source dependencies, including conducting snippet analysis.

Synopsys’ Black Duck team offers a snippet analysis tool that does exactly what its name suggests: it analyzes source code and can match snippets as small as a handful of lines to the open source projects where they originated. Black Duck can provide customers with the license associated with that project and advise on associated risk and obligations. This is all powered by a KnowledgeBase™ of more than 6 million open source projects and over 2,750 unique open source licenses.

Synopsys is now offering a preview of this AI code analysis tool to the public at no cost. This will enable developers to leverage productivity-boosting AI tools without worrying about violating license terms that other SCA tools might overlook.

The Security Tooling Faceoff — Open Source Security vs. Commercial https://thenewstack.io/the-security-tooling-faceoff-open-source-security-vs-commercial/ Mon, 18 Sep 2023 16:44:16 +0000 https://thenewstack.io/?p=22718411


The shift-left movement has done wonders for advancing many engineering disciplines over the past decade, and none has seen more progress than the security discipline with regard to shifting actions left of production. One of the first and biggest proponents of shift-left security was Snyk, which came to market with a novel approach of opening pull requests (PRs) inside the developer workflow to remediate CVEs found in open source packages, and we’ve taken this further and spoken about born-left security.

Since first launching its SCA scanner for open source, which was their claim to fame, Snyk has added quite a few tools to its suite to provide more extensive security. In this post, we’ll take a look at how the industry has evolved from a security perspective, and where we still need to improve and level up our developer experience.

What Developers Really Need

Auto-remediation through opening PRs taught the industry a lot about what security experience needs to look like for developers to truly adopt it — we also wrote about this security experience and what developers really want. We have evolved as an industry in the way we expect developers to receive security alerts and respond to them, and we have come a long way in understanding what developers need for the tools to be useful. They don’t want fragmented tooling or too many alerts and dashboards; they need immense trust in the quality of their security tools’ results, and the tasks they address have to keep them in context and in flow at all times.

Developers have also evolved over the years. There is almost no security-minded engineer today that doesn’t know that for their products and platforms to be secure, they need more than just third-party OSS package scanners. They now know that they need to scan their code, their IaC and cloud environment, their containers and Kubernetes clusters, and even go as far as doing some dynamic web app security testing in runtime. All this, just to achieve the most baseline security when shipping code and products to customers.

Yet, it’s interesting to note that one of the biggest problems in our industry is still fragmentation. That’s because each and every single one of these security capabilities is a company and domain expertise unto itself. It’s extremely difficult to excel in all of these domains simultaneously, and also provide this with a seamless developer experience.

This is where we’re starting to see the world of single-vendor security platforms break down. Today’s security suites from the largest vendors in the industry now seemingly provide this end-to-end suite of capabilities in a single shop, but do they really stack up?

Benchmarking Security Platforms against Open Source Alternatives

When building Jit from the ground up, it was clear to us that this consolidation approach is indeed the right approach to security, but the execution is ultimately what determines if we’ve succeeded as an industry. We had the privilege of doing a lot of research — yes that critical piece in R&D that is often overlooked — when building a truly viable security orchestration platform. We wanted to explore the world of commercial tools vs. open source tools and understand what to provide out of the box for our users to level us up as an industry.

The results were astounding. While we found that companies with a traditional core offering benchmarked well against open source tools, the data clearly shows that the additional tools added to a suite to provide greater end-to-end coverage rarely stacked up (if at all) against best-of-breed open source tools.

Examples of this include tools like Kubescape and KICS for Kubernetes manifest file scanning — Kubescape was superior, with KICS in second place, ahead of other commercially available tools in the market. This was compounded when adding dedicated rules curated by the Jit team with AI to enhance detection (which is not possible with closed-source commercial products).

This was also true for code scanning, and even with languages promoted as well-supported like Python. Open source Semgrep performed better across a wide range of programming languages, and particularly the less supported ones in the commercial tooling — despite being widely adopted ones — like Scala.
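
Part of the appeal of these open source tools is how little friction there is in running them. As a rough illustration only (the rule sets and paths here are examples, not the exact configuration used in our benchmarks):

# Scan Kubernetes manifest files with Kubescape
kubescape scan manifests/*.yaml

# Run Semgrep against the code base with an automatically selected rule set
semgrep --config auto .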

This just strengthened our working hypothesis that developers should have the freedom to choose the tools in their stack, and not have this dictated by an engagement with a single supplier. Our security cannot be compromised because of the vendor we choose to buy from.

True Shift Left for Developers

As we came to this conclusion, we understood that the best platform we could provide for our users (the developers) would have this QUADfecta of capabilities:

  • The breadth of coverage;
  • Orchestration, unification, and extensibility in a single platform;
  • Auto-remediation in context for developers;
  • Developer experience — DevEx all the things!

When we talk about the breadth of coverage, it’s clear that we can’t compromise or prioritize one part of the stack over the other. We know the code is equally as important as the open source packages, as the cloud configurations to which the software is deployed. This can only be achieved by enabling developers to choose any tool they want — open source or commercial — and also curate and extend a really great set of open source controls out of the box. Which brings us to the next capability.

It’s not enough to just orchestrate a predefined set of tools and leave it at that. There are several aspects at play to truly deliver the experience developers need and expect. First, you need to unify the output and schema — each tool has a different way of reporting and alerting about issues, oftentimes through exhaustive lists of findings (that need to be filtered and prioritized and later assigned as well). Next, you have to be able to support the addition of new tools and coverage all the time in a seamless way and make it possible to bring your own homegrown tools.

Even if you provide a really sleek UI with all the data unified in a human-readable format, with the data prioritized, most developers don’t work this way, in any case. They want the critical alerts to arrive in context and upon other gating and compelling events — like when a PR is created. Our fix-first mindset is built to point you to the exact problematic line of code and offer code fixes and fix guidelines inside PRs, so you don’t have to go hunt findings down later in some UI somewhere.

On top of this, a lot has been invested in making alerting more humane (no more alert or dashboard fatigue!), and enabling developers to only use the UI when they want or need to — and certainly not have to log in to several fragmented UIs to get the full picture of their security posture. Instead, developers can receive alerts directly in Slack, GitHub or Jira, or even send tasks to Jira to handle later — so security integrates with their existing tooling and workflows and doesn’t require them to leave context to address security.

Swiss Army Knife Security

We’ve come a very very long way in the security industry, shift-left security is well understood, and excellent tooling is now available to cover the many layers of a modern software stack. This makes it possible to now take our security capabilities to the next level, and unlock much-needed developer experience for the security world — security experience. Developers are the new tooling decision-makers, and if we don’t optimize for the humans in the system, we will be setting our security programs up for failure.  By providing a similar experience and approach to tooling that has been available for the DevOps and engineering world for some time, we will enable security to move at the same pace and velocity of high-scale engineering teams.

Harden Ubuntu Server to Secure Your Container and Other Deployments https://thenewstack.io/harden-ubuntu-server-to-secure-your-container-and-other-deployments/ Sat, 16 Sep 2023 13:00:48 +0000 https://thenewstack.io/?p=22717712


Ubuntu Server is one of the more popular operating systems used for container deployments. Many admins and DevOps team members assume if they focus all of their security efforts starting with the container image on up, everything is good to go.

However, if you neglect the operating system on which everything is installed and deployed, you are neglecting one of the most important (and easiest) steps to take.

In that vein, I want to walk you through a few critical tasks you can undertake with Ubuntu Server to make sure the foundation of your deployments is as secure as possible. You’ll be surprised at how easy this is.

Are you ready?

Let’s do this.

Schedule Regular Upgrades

I cannot tell you how many servers I’ve happened upon where the admin (or team of admins) failed to run regular upgrades. This should be an absolute no-brainer, but I do understand the reasoning behind the failure to do this. First off, people get busy, so upgrades tend to fall by the wayside in favor of putting out fires.

Second, when the kernel is upgraded, the server must be rebooted. Given how downtime is frowned upon, it’s understandable why some admins hesitate to run upgrades.

Don’t.

Upgrades are the only way to ensure your server is patched against the latest threats; if you don’t upgrade, those servers remain vulnerable.

Because of this, find a time when a reboot won’t interrupt service and apply the upgrades then.
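
On a standard Ubuntu Server install, the upgrade itself is only a couple of commands, and you can check whether a reboot is actually required before scheduling one:

sudo apt update && sudo apt upgrade -y

# Ubuntu creates this file only when an update requires a reboot
[ -f /var/run/reboot-required ] && echo "Reboot required"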

Of course, you could also add Ubuntu Livepatch to the system, so patches are automatically downloaded, verified, and applied to the running kernel, without having to reboot.
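
If you would rather automate this, unattended-upgrades can apply security updates for you, and Livepatch can be enabled once you have a token from Canonical. The snap-based setup shown here is one way to do it, and the token value is a placeholder:

sudo apt-get install unattended-upgrades -y
sudo dpkg-reconfigure -plow unattended-upgrades

# Enable Livepatch (requires a token from Canonical)
sudo snap install canonical-livepatch
sudo canonical-livepatch enable YOUR_TOKEN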

Do Not Enable Root

Ubuntu ships with the root account disabled. In its place is sudo, and I cannot recommend enough that you do not enable and use the root account. By enabling the root account, you open your system(s) up to security risks. You can even go a step further and lock the root password altogether with the command:

sudo passwd -l root


What the above command does is lock the root password, so until you reset it, the root user is effectively inaccessible via password login.

Disable SSH Login for the Root User

The next step you should take is to disable the root user SSH login. By default, Ubuntu Server permits root login over SSH with key-based authentication (PermitRootLogin prohibit-password), which should be considered a security issue waiting to happen. Fortunately, disabling root SSH access is very simple.

Log in to your Ubuntu Server and open the SSH daemon config file with:

sudo nano /etc/ssh/sshd_config


In that file, look for the line:

#PermitRootLogin prohibit-password


Change that to:

PermitRootLogin no


Save and close the file. Restart SSH with:

sudo systemctl restart sshd


The root user will no longer be allowed access via SSH.
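
To confirm the change took effect, you can ask the SSH daemon to print its effective configuration; the output should show permitrootlogin no:

sudo sshd -T | grep -i permitrootlogin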

Use SSH Key Authentication

Speaking of Secure Shell, you should always use key authentication, as it is much more secure than traditional password-based logins. This process takes a few steps and starts with you creating an SSH key pair on the system(s) that will be used to access the server. You’ll want to do this on any machine that will use SSH to remote into your server.

The first thing to do is generate an SSH key with the command:

ssh-keygen


Follow the prompts and SSH will generate a key pair and save it in ~/.ssh.

Next, copy that key to the server with the command:

ssh-copy-id SERVER


Where SERVER is the IP address of the remote server.

Once the key has been copied, make sure to attempt an SSH login from the local machine to verify it works.

Repeat the above steps on any machine that needs SSH access to the server, because we’re going to disable SSH password authentication next. One thing to keep in mind is that, once you disable password authentication, you will only be able to access the server from a machine that has copied its SSH key to the server. Because of this, make sure you have local access to the server in question (just in case).

To disable SSH password authentication, open the SSH daemon configuration file again and look for the following lines:

#PubkeyAuthentication yes


and

#PasswordAuthentication yes


Remove the # characters from both lines and change yes to no on the second. Once you’ve done that save and close the file. Restart SSH with:

sudo systemctl restart sshd


Your server will now only accept SSH connections using key authentication.
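
From one of your client machines, you can verify that password logins are now refused by telling the SSH client to skip key authentication (replace USER and SERVER with your own values):

ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password USER@SERVER

# Expected result: Permission denied (publickey).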

Install Fail2ban

Speaking of SSH logins, one of the first things you should do with Ubuntu Server is install fail2ban. This system keeps tabs on specific log files to detect unwanted SSH logins. When fail2ban detects an attempt to compromise your system via SSH, it automatically bans the offending IP address.

The fail2ban application can be installed from the standard repositories, using the command:

sudo apt-get install fail2ban -y


Once installed, you’ll need to configure an SSH jail. Create the jail file with:

sudo nano /etc/fail2ban/jail.local


In the file, paste the following contents:

[sshd]
enabled = true
port = 22
filter = sshd
logpath = /var/log/auth.log
maxretry = 3


Restart fail2ban with:

sudo systemctl restart fail2ban


Now, anytime someone attempts to log into your Ubuntu server via SSH and fails three times, their IP address will be banned for the configured ban time (set bantime = -1 in the jail if you want bans to be permanent).
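
You can check the jail at any time with the fail2ban client, and unban an address if you lock yourself out (the IP address below is a placeholder):

sudo fail2ban-client status sshd

# Remove a banned address if needed
sudo fail2ban-client set sshd unbanip 203.0.113.10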

Secure Shared Memory

By default, shared memory is mounted read/write and without restrictions. That means the /run/shm space can be abused by any application or service that has access to it. To avoid this, you simply mount /run/shm with more restrictive options.

The one caveat to this is you might run into certain applications or services that require broader access to shared memory. Fortunately, most applications that require such access are GUIs, but that’s not an absolute. So if you find certain applications start behaving improperly, you’ll have to relax the mount options for shared memory.

To do this, open /etc/fstab for editing with the command:

sudo nano /etc/fstab


At the bottom of the file, add the following line:

tmpfs /run/shm tmpfs defaults,noexec,nosuid 0 0


Save and close the file. Reboot the system with the command:

sudo reboot


Once the system reboots, shared memory will be mounted with the noexec and nosuid options, so it can no longer be used to execute code or escalate privileges.

Enable the Firewall

Uncomplicated Firewall (UFW) is disabled by default. This is not a good idea for production machines. Fortunately, UFW is incredibly easy to use and I highly recommend you enable it immediately.

To enable UFW, issue the command:

sudo ufw enable


The next command you’ll want to run is to allow SSH connections. That command is:

sudo ufw allow ssh


You can then allow other services as needed, such as HTTP and HTTPS, like so:

sudo ufw allow http
sudo ufw allow https
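
It is also worth setting the default policies explicitly and reviewing the active rule set once your services are allowed; the port number below is only an example:

sudo ufw default deny incoming
sudo ufw default allow outgoing

# Example: open a single application port, then review the rules
sudo ufw allow 8080/tcp
sudo ufw status verbose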


For more information on UFW, make sure to read the man page with the command:

man ufw

Final Thoughts

These are the first (and often most important) steps to hardening Ubuntu Server. You can also take this a bit further with password policies and two-factor authentication but the above steps will go a long way to giving you a solid base to build on.

How to Design Scalable SaaS API Security https://thenewstack.io/how-to-design-scalable-saas-api-security/ Fri, 15 Sep 2023 15:50:30 +0000 https://thenewstack.io/?p=22718176


Software as a Service (SaaS) is becoming the norm for many organizations providing digital services. Data is served by APIs and consumed by user-facing applications or made available to partners. Solutions can be rolled out to many tenants using the features of the cloud platform. Customers only need to run apps and do not need to manage any infrastructure.

Yet APIs do not stand alone. They depend on components that provide vital supporting roles to ensure key behaviors in deeper areas, like availability and scalability. One area that must be designed early on is security. For the best results, it is recommended to externalize the complexity of security from application components. Otherwise, things will become significantly more complicated as the number of components and people grow. When designing SaaS solutions, a security-first approach enables the best business outcomes. I’ll explain how below.

Security Requirements

Security must be a key consideration early in any modern architecture design. If security is not taken into account, designs become difficult to retrofit later and may become exposed to threats that could have been mitigated with the proper infrastructure from the onset.

A solution must enable APIs to share data securely to the internet and restrict access according to the organization’s business rules. It also needs to protect against common API threats. This design needs to scale effectively to many APIs as your organization adds products and services.

To integrate with APIs and access secure data, client applications must be able to authenticate users. There are many possible ways to do this, which differ in terms of security and user experience. The client application must then receive an API message credential that identifies the user and locks down backend access based on the user’s identity and business rules.

A key characteristic of the security design is that it must scale effectively as the number of clients, APIs, development teams and business partners grow. This can include dealing with multiple tenants or expanding the business to new markets, where security and legal requirements may differ.

The OAuth 2.0 Authorization Framework

OAuth 2.0 is a family of specifications that map to security use cases for organizations. It also provides the best current capabilities for securely connecting systems. This is done by introducing a central component called the authorization server to manage the lower-level security. End-to-end security flows are then straightforward to integrate into clients and APIs in any technology stack with minimal code.

The basics are that the client runs a code flow, which can authenticate the user in many possible ways. Adding new authentication methods to the client does not require any code changes. The client is then issued a least privilege access token, which locks down its backend access.
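
As a minimal sketch of the last step of that code flow, the client exchanges the authorization code for tokens at the authorization server's token endpoint, as defined by the OAuth 2.0 specification. The endpoint URL, client credentials, code and redirect URI below are placeholders:

curl -s -X POST https://login.example.com/oauth/token \
  -d grant_type=authorization_code \
  -d code=AUTH_CODE_FROM_REDIRECT \
  -d redirect_uri=https://app.example.com/callback \
  -d client_id=EXAMPLE_CLIENT_ID \
  -d client_secret=EXAMPLE_CLIENT_SECRET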

APIs receive the access token in an unforgeable JSON Web Token (JWT) format. They cryptographically verify the JWT on every request, and then apply claims-based authorization using straightforward code. This enables a modern zero trust architecture (ZTA) that protects against both internal and external threats.

The more subtle advantage of OAuth 2.0 is its scalability. For example, an organization should follow scope and claims best practices as it adds more APIs. Scopes enable access tokens to flow easily between related APIs while ensuring access is immediately denied if the token is sent to an unrelated API. Similarly, a tenant_id claim could be issued to ensure an immediate forbidden response if a user from one tenant attempts to access resources owned by another tenant.

The authentication and authorization best practices are easy for people to learn. Once understood, the patterns can be applied the same way across many components and the teams working on them, without increasing complexity.

Choosing an Authorization Server

Once the SaaS organization has a clearer idea of how it wants its end-to-end security to work, the next step is to choose an authorization server. When using SaaS for APIs, it is common to lean toward using a SaaS provider for the authorization server. An example might be the cloud provider’s built-in implementation. But choosing an authorization server without sufficient care and attention can lead to expensive rework later.

First, the security behaviors of the product should be reviewed along with its support for OAuth standards. Clients and APIs should be coded in a standards-based way so that security only ever needs minimal rework in the future. Also, pay particular attention to access token design to ensure that the correct scopes and claims can be issued and used across multiple APIs. The correct authentication method(s) must also be supported, and the login user experience should be reviewed.

Next, consider the viewpoint of people. This includes engineers who sometimes need to run end-to-end security flows on their local computers. DevOps teams are often responsible for production deployments to update the authorization server’s configuration settings. In a worst-case scenario, a failed authorization server can result in downtime for digital services. Prepare for this by reviewing the authorization server’s troubleshooting capabilities. Avoid a setup where you rely on support engineers from a third-party SaaS provider to resolve technical issues.

SaaS APIs need supporting components to function reliably at scale. In addition to the authorization server, other key third-party components may be required, such as those for observability. For behaviors that are crucial to the correct backend architecture, aim to use best-of-breed components.

Designing Cloud Deployments

When designing SaaS API architectures, do some early thinking about deployment locations. For instance, the authorization server should always operate next to your APIs for performance reasons. It should also be deployable across multiple regions, each with its own cluster. This should not require duplicating all of the authorization server’s security settings. When high isolation is required, partitioning identity resources by the tenant must also be possible.

Different regions can have varying legal requirements, which may affect the hosting design, such as the choice of cloud provider for that region. The ability to store sensitive data in each user’s home region might be necessary in some business use cases. This can be enabled by issuing a region claim to access tokens and then routing API requests accordingly. This may lead an organization to design a global, multiregion deployment.

Portability is, therefore, an essential quality to aim for when designing API architectures. This could be managed using serverless or cloud native technologies for the APIs themselves. For supporting components, many best-of-breed implementations are cloud native. They can, therefore, be run identically in any cloud or on premises. Deployment options can vary from simple virtual machines to full-fledged Kubernetes clusters.

A model for deploying anywhere also empowers development teams. They can spin up their own instances wherever needed, including on local workstations.

Conclusion

Some upfront design thinking is recommended when organizations plan their SaaS API architecture. An early focus on security and deployment can prevent blocking issues or expensive rework later. When choosing critical supporting components, avoid taking dependencies on third-party SaaS components since they may limit deployment options and introduce dependencies on third-party people to support the system.

Instead, empower development and DevOps teams with the best setup, deployment and operational features. At Curity, we provide many resources for integrating security while achieving the best all-around architecture. The designs and deployments scale to many components without adding complexity. Regardless of the authorization server you choose, our website has extensive learning resources to help your teams on their identity journey.

DevOps, DevSecOps, and SecDevOps Offer Different Advantages https://thenewstack.io/devops-devsecops-and-secdevops-offer-different-advantages/ Fri, 15 Sep 2023 12:00:14 +0000 https://thenewstack.io/?p=22718196


Within the business of software development, DevOps (Development and Operations) and DevSecOps (Development, Security, and Operations) practices have similarities and differences… and both offer advantages and disadvantages. DevOps offers efficiency and speed while DevSecOps integrates security initiatives into every stage of the software development lifecycle. However, gaining a better view of the DevOps vs. DevSecOps question requires a deeper inspection.

Development Teams Gain an Advantage Through Agility

The similarities and differences between DevOps and DevSecOps begin with Agile project management and the values found within Agile software development. Built around an emphasis on cross-functional teams, successful Agile management depends on the effectiveness of teamwork and the constant integration of customer requirements into the software development cycle. Rather than focus on processes, tools, and volumes of comprehensive documentation, Agile values a development environment that cultivates the adaptability, creativity, and collaboration of the individuals who make up the development and operations teams. Because of the reliance on Agile management, DevOps produces working software that satisfies customer needs.

While traditional approaches to development and testing can result in communication failures and siloed actions, DevOps asks project leads, programmers, testers, and modelers to work smarter as one cohesive unit. In addition, customers serve as important and valued members of DevOps teams through continuous feedback. Melding the development, testing, and operations teams together speeds the process of producing code and, in turn, delivers applications and services to customers at a much faster pace.

Incorporating continuous feedback into the development process creates a quality loop within DevOps. As a result, sustaining quality occurs at each point of the software development cycle. With the needs of the customer driving quality, programmers constantly check for errors in code while adapting to changing customer requests. As the cycle continues, testers measure application functionality against business risks.

Speed, quality, and efficiency grow from the daily integration of testing through Continuous Integration (CI) and Continuous Delivery (CD). Teams can quickly detect integration errors while building, configuring, and packaging software for customers. The practices come full circle as customers get regular opportunities to use the software and offer feedback.

What Is the Difference Between DevOps and DevSecOps?

DevOps — and the utilization of Agile management principles — establishes the foundation for DevSecOps. Both methodologies utilize the same guiding principles and rely on constant development iterations, continuous integration, continuous delivery, and timely feedback from customers. Even with those similarities in mind, though, the question of “what is the difference between DevOps and DevSecOps?” remains.

When comparing DevOps vs. DevSecOps, the objective shifts from a sole focus on speed and quality to speed, quality, and security. The key difference, though, rests within the placement of security within the development cycle and the need for sharing responsibility for security. Teams working within the DevOps framework incorporate the need for security at the end of the development process.

In contrast, teams working within the DevSecOps framework consider the need for security at each part — from the beginning to the end — of the software development cycle. Because development and operations teams share responsibility, security moves from an add-on to a prominent part of project plans and the development cycle. As a result, DevSecOps mitigates risk within the entire software development process.

Another difference between DevOps and DevSecOps also exists. The definition of quality for DevSecOps moves beyond the needs of the customer and adds security as a key ingredient. Because security integrates into DevSecOps processes from start to finish, the design process includes developers, testers, and security experts. With this shift in mindset and workplace culture, developers must recognize that their code — and any dependencies within that code — have implications for security. Integrating security tools from beginning to end of the coding process increases opportunities for developers and testers to discover flaws that could open applications to cybercrime.

The principles of CI and CD not only serve to automate processes but also lead to more frequent checks and controls for coding, testing, and version control. Integrating security into the development process provides a greater window for mitigating or eliminating business risks while shortening the delivery cycle.

Another Alternative Exists: SecDevOps vs. DevSecOps

Development teams always search for methods to create better code and to decrease the time needed to bring products to market. While DevOps and DevSecOps offer distinct advantages in terms of speed and security, another alternative has entered the development arena. SecDevOps moves teams beyond integrating security into each stage of software development by prioritizing security and eliminating vulnerabilities across the lifecycle. Within the SecDevOps environment, developers work as security experts who write code.

When comparing SecDevOps vs. DevSecOps, SecDevOps places less emphasis on continuous assessment and communication. Instead of emphasizing business priorities and reducing time-to-market, SecDevOps may sacrifice speed and efficiency for security. However, the SecDevOps vs. DevSecOps comparison takes another turn when considering security testing and risk mitigation.

With DevSecOps, security testing concludes at the end of the coding cycle. Because SecDevOps prioritizes security, testing happens at the very beginning of the software development cycle. Development and operations teams apply security policies and standards during the planning phase as well as within each development phase. Creating clean, bug-free code becomes the responsibility of everyone on the respective teams.

The transition to SecDevOps requires coders who have an intimate knowledge of security policies and standards. Although SecDevOps may reduce errors in code, and subsequently cut development costs, some costs may run higher because of the need to train or hire coders who can recognize and implement security protocols. SecDevOps also requires lengthier planning processes that can add costs to the development cycle. SecDevOps teams may also request specialized software to detect bugs, as well as tools for improved data protection. As a result, the costs of prioritizing security may not align with all the benefits that businesses seek.

SecDevOps vs. DevSecOps vs. DevOps… and the Winner Is…

Ultimately, customers win in the DevOps vs. DevSecOps vs. SecDevOps comparison. Each approach offers significant advantages, and similar principles run through all three. However, the definition of "win" varies and certainly could involve the phrase "it depends." While DevOps brings development and operations teams together for better communication and cooperation, DevSecOps maintains the emphasis on teams, customers, and time-to-market but slightly changes the model by inserting security at each stage of the development process. SecDevOps places much less emphasis on speed while protecting the customer from vulnerabilities that lead to cyberattacks and loss of reputation or business.

Today — and well into the future — customers seek a balance between achieving business goals and protecting against vulnerabilities. Including security from start to finish while maintaining the ability to quickly deliver applications to customers and to quickly adapt to customer needs gives DevSecOps a business advantage.

The post DevOps, DevSecOps, and SecDevOps Offer Different Advantages appeared first on The New Stack.

]]>
What Are CIS Benchmarks in Cloud Security? https://thenewstack.io/what-are-cis-benchmarks-in-cloud-security/ Thu, 14 Sep 2023 13:51:40 +0000 https://thenewstack.io/?p=22718101

The process of securing software, IT systems and network infrastructure requires adopting best practices, tools and techniques to make it

The post What Are CIS Benchmarks in Cloud Security? appeared first on The New Stack.

]]>

The process of securing software, IT systems and network infrastructure requires adopting best practices, tools and techniques to be effective. There is no one-size-fits-all rule for establishing a minimum status quo in cybersecurity operations.

Today, there are several options for securing infrastructure services that enable organizations to adopt a strong security posture (and improve their existing one). The Center for Internet Security (CIS) benchmarks (an extensive catalog of standards used as a baseline for security best practices) are at the top of this list. By having a reference guide for minimum security controls, organizations can compare their practices against a consensus level.

This article explores CIS benchmarks, including what they are, why they were established and how to effectively evaluate them in the context of cloud security.

What Are CIS Benchmarks?

CIS benchmarks are consensus-based configuration baselines and best practices for securing systems. They are individually divided into different categories focused on a particular piece of technology. These categories include:

  • Operating systems
  • Server software
  • Desktop software
  • Mobile devices
  • Networks
  • Cloud providers
  • Print devices

In other words, the CIS benchmarks framework provides a list of the minimum required security controls and practices for running secure workloads.

The benchmarks come with complete reference documents, which catalog them one by one using specific criteria like applicability, severity, rationale and auditing steps.

 


Fig. 1 — Example of CIS for Linux

In addition to the benchmark documents, CIS also offers hardened images for major public providers. These images save security teams the time they would otherwise have spent trying to bake the recommendations into their virtual machines from scratch.

Before we discuss benchmark rules in depth, let’s review an example of a benchmark.

Example Benchmark

Let’s look at one of the benchmarks from the CIS Distribution Independent Linux guide:

1.3.2 Ensure filesystem integrity is regularly checked (scored).

Profile Applicability:

Level 1 — Server

Level 1 — Workstation

Description: Periodic checking of the filesystem integrity is needed to detect changes to the filesystem.

Rationale: Periodic file checking allows the system administrator to determine on a regular basis if critical files have been changed in an unauthorized fashion.

Audit: Run the following to verify that aidecheck.service and aidecheck.timer are enabled and running:

# systemctl is-enabled aidecheck.service
# systemctl status aidecheck.service
# systemctl is-enabled aidecheck.timer
# systemctl status aidecheck.timer


Remediation: Run the following commands:

# cp ./config/aidecheck.service /etc/systemd/system/aidecheck.service
# cp ./config/aidecheck.timer /etc/systemd/system/aidecheck.timer
# chmod 0644 /etc/systemd/system/aidecheck.*
# systemctl daemon-reload
# systemctl reenable aidecheck.timer
# systemctl restart aidecheck.timer


There are a few important characteristics of each benchmark that you should understand in detail:

  • Applicability: This shows which systems or services this benchmark applies to (since the current guide is for Linux, the main options are servers or workstations).
  • Scored vs. unscored: A status of scored or automated means that the benchmark can be automated into a workflow (which leads to quicker implementation and faster identification of misalignments). On the other hand, a status of unscored or manual means that you cannot provide a pass/fail assessment score using automated tooling (which makes auditing more difficult).
  • Audit steps included: Whenever possible, a list of auditing steps is included so that the reader can quickly check the benchmark.
  • Remediation steps included: This is a series of commands that either set up or restore the benchmark to the correct status after a failure.

Now that you have a better picture of what the benchmarks look like, we’ll give you some specific recommendations for how to use them in cloud security workloads.

How to Use CIS Benchmarks in Cloud Security

Below, we’ll discuss the key areas for covering benchmark recommendations with CIS.

CIEM

Identity security services are a must-have for interfacing with any reputable cloud provider. At the same time, the ineffective use or misconfiguration of access control policies can significantly weaken an organization's overall security posture.

A common risk when configuring cloud infrastructure entitlements management (CIEM) is having overly permissive identities or too many policies for the security teams to maintain. The CIS benchmarks require that you review individual cloud providers’ documents (AWS, Azure and so on) for specific identity security rules.

For example, if you are operating with AWS, the CIS Amazon Web Services Foundations benchmark contains more than 23 benchmarks related to IAM. These recommendations need to be evaluated, applied to the account holder and audited for compliance.

That’s why many organizations use automated tools to monitor CIS compliance. CIS also offers free and premium tools that you can use to scan IT systems and generate CIS compliance reports. These tools alert system admins if the existing configurations don’t meet the CIS benchmark recommendations.

On the other hand, you can tackle this problem by offsetting the risk to a dedicated cloud security custodian. By using a novel solution like Orca’s IAM Remediation, which can manage and provide accurate suggestions for IAM policies, you can relieve your team of the burden of having to accurately implement the baseline controls manually.


Fig. 2 — IAM Recommendations (source: https://orca.security/)

Data Security

Data security represents another critical area that warrants proper compliance. Data breaches and the exposure of sensitive PII can be devastating, both financially (since the lack of safety controls can result in lawsuits and fines) and in terms of reputational damage.

Since private data is a primary target of adversary attacks and foreign agents, it appears on the CIS benchmark list in many areas. For example, there are dedicated benchmarks for key rotations, setting the right permissions for data stored on disks and ensuring encryption both at rest and in transit. The following are a few examples:

  • In Kubernetes: 6.9 Storage 6.9.1 — Consider enabling Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks (PD).
  • In AWS: 2.8 — Ensure rotation for customer-created CMKs is enabled.
  • In Red Hat Linux: 2.2.20 — Ensure rsync is not installed or the rsyncd service is masked (automated).
  • In Red Hat Linux: 1.3.1 — Ensure AIDE is installed (automated).

Again, to support these benchmarks, you’ll need to have a catalog of your organization’s systems and software, validate the existing security profiles and make adjustments to cover the baseline CIS recommendations when needed.

The Orca Cloud Security Platform provides a Data Security Posture Management (DSPM) module that deals specifically with data security remediation out of the box. It offers a context-driven view of any sensitive data exposures, misconfigurations and current risks inside the organization’s data stores. Having a continuous service for data security compliance simplifies security operations and improves overall safety.

Kubernetes Benchmark

Kubernetes security is of considerable interest nowadays since many organizations are migrating their workloads to this technology. To ensure compliance and reliability, having an up-to-date and reliable security baseline for Kubernetes workloads is a must.

More specifically, there is a requirement that relevant security controls are aware of the Kubernetes architectural components and their security holes. CIS provides extensive benchmark material for securing K8s workloads that covers both base distributions and cloud providers.

Following the recommended approaches for K8s requires an extensive orientation process, since a typical deployment consists of many moving parts and components. For example, there are more than 60 recommendations in the CIS Google Kubernetes Engine (GKE) benchmark to date.


Fig. 3 — GKE Recommendations (source: https://www.cisecurity.org/benchmark/kubernetes)

The ephemeral nature of pods does not make this job any easier. You’ll need to invest a lot of time and resources to achieve security automation that covers the CIS benchmark levels.

An agentless security paradigm can help scale security recommendations and best practices while supporting thousands of containers and nodes. With Orca’s Container and Kubernetes Security module, you get better insights into any security gaps in your K8s clusters within minutes.

Next Steps with CIS Benchmarks

If you want to learn more about the CIS benchmarks, I recommend downloading the free resources from the official site. Take some time to review the benchmark recommendations and check which areas you should focus on. This will provide you with more appropriate context for learning how to properly secure things and why.

Next, you’ll want to evaluate and automate the relevant CIS benchmarks for your organization. This will ensure that you separate the minimum required rules from unnecessary controls or policies to improve your security levels as a whole.

Finally, you’ll want to level up your infrastructure security baseline by utilizing a cloud native application protection platform (CNAPP) like Orca Security. Since they can offload most of the menial tasks through automation and advanced technology, the benefits of such services are multiplied. Request a demo or sign up for a free cloud risk assessment to see how the Orca Cloud Security Platform can help you achieve a new level of security and visibility in the cloud.

Further Reading

The post What Are CIS Benchmarks in Cloud Security? appeared first on The New Stack.

]]>
How Attackers Bypass Commonly Used Web Application Firewalls https://thenewstack.io/how-attackers-bypass-commonly-used-web-application-firewalls/ Wed, 13 Sep 2023 15:31:35 +0000 https://thenewstack.io/?p=22718098

Cloud-based web application firewalls (WAFs) sport an impressive array of protections. Yet many hackers claim they can easily bypass even

The post How Attackers Bypass Commonly Used Web Application Firewalls appeared first on The New Stack.

]]>

Cloud-based web application firewalls (WAFs) sport an impressive array of protections. Yet many hackers claim they can easily bypass even the most sophisticated WAFs to execute attack queries against protected assets with impunity.

The threat research team at NetScaler, an application delivery and security platform, found that many cloud-based WAFs can be readily circumvented. If you have committed to paying for a WAF service, you need to run tests to ensure that your WAF can do — and is doing — what it’s supposed to do to protect your applications and APIs.

If you take away nothing else, I implore you to run some easy tests against your environment to check that your WAF service is protecting optimally. At the end of this article, I’ve outlined a few simple but often-overlooked steps to help you identify if someone is already bypassing your WAF and compromising the security of your web applications and APIs. But first, let’s look at the most common ways that attackers get around WAF defenses.

The Most Common WAF Attacks

Cloud-based and on-premises WAFs are security solutions delivered as a service that helps protect web applications and APIs from a variety of attacks that are documented by the Open Web Application Security Project (OWASP). The most common WAF attacks include:

Injection

When it comes to robbing a ton of data through a keyhole like a web application, SQL injection is the way to go. Injection attacks were first documented more than 25 years ago and are still commonly used today.

The beginning of a database query is often designed to retrieve all information, followed by a filter to only show one piece of information. For example, a commonly used query is one that initially retrieves all customer information but then filters for a specific customer ID. The database executes this command against every line in the table and will return the requested information on the table row(s) where this statement is true. Usually, this is one single row. Attackers manipulate the form fields that are used to populate such queries to insert database commands, resulting in a statement that evaluates to true for every row in the table, which returns the contents of the entire table in the response. In an ideal world, developers would always secure their forms, so injection attacks would not be possible. However, developers can be prone to error on occasion, so not all form fields are protected all of the time.
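
To make the mechanics concrete, here is a small, self-contained Python sketch using the standard library's sqlite3 module. The table and inputs are invented for illustration; the point is simply that a string-built filter lets the input rewrite the WHERE clause, while a parameterized query treats the same input as plain data.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id TEXT, name TEXT)")
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [("c1", "Alice"), ("c2", "Bob"), ("c3", "Carol")])

user_input = "' OR '1'='1"  # classic injection probe

# Vulnerable: string concatenation lets the input become part of the SQL itself,
# so the WHERE clause evaluates to true for every row.
vulnerable = f"SELECT * FROM customers WHERE id = '{user_input}'"
print(cur.execute(vulnerable).fetchall())   # all three rows come back

# Safer: the parameterized query returns nothing, because no id equals the probe string.
safe = "SELECT * FROM customers WHERE id = ?"
print(cur.execute(safe, (user_input,)).fetchall())  # []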

The latest OWASP Top Ten list now includes cross-site scripting in its injection category. Cross-site scripting is where attackers insert their scripts into your website or your web URLs so that unsuspecting victims execute them in their browsers, allowing attackers to transmit cookies, session information, or other sensitive data to their own web servers.

Broken Access Control

Broken access control allows an attacker to act outside of the intended expected behavior of the application or API developer. This vulnerability can lead to unauthorized information disclosure, modification or destruction of all data, and the ability to perform a business function outside the user’s limits, with some exclusive to APIs.

OWASP recently raised the criticality of broken access control to number 1 on its top 10 list of web application vulnerabilities. The reason for its newfound importance lies in the fact that this vulnerability category is especially applicable to APIs — a relatively new vector compared to web applications, which have been around for a long time. Attackers find APIs and attempt to exfiltrate information from them. And because APIs are not designed for human input, the same sort of input validation and checks used for web applications may not be top of mind for developers. Sometimes APIs are published without the knowledge of the security and operations teams.

Vulnerable and Outdated Components

Whenever a new vulnerability is found in a commonly used component, it results in a massive spate of bot-generated traffic scanning the internet, looking for systems that can be compromised. If you set up a web server and make it available to the internet, you will quickly see log entries for requests made to specific types of applications that do not exist on your newly created web server. This activity is simply the hacker network casting a wide net looking for vulnerable servers to harvest.

The primary function of a WAF is to examine the contents of an HTTP request — including the request body and request headers where the attack payloads are located — and decide if the request should be allowed or blocked. Some WAFs will also inspect responses to assess if there is an unauthorized leaking of data or sensitive information. They will also record the response structure (a web form or cookies, for example), which effectively ensures that subsequent requests are not tampered with.

The 3 Types of WAFs

Web application and API firewalls generally come in three models: negative, positive, and hybrid:

  • The negative security model uses simple signatures and is pre-loaded with known attacks that will block a request if there is a match. Think of this as a “deny list.” In other words, the default action is “allow” unless it finds a match.
  • The positive security model is pre-loaded with a pattern of known “good” behavior. It compares requests against this list and will only allow the request through if it finds a match. Everything else gets blocked. This would be considered an “allow list.” In this case, the default action is “block” unless it finds a match. The positive security model is considered much more secure than the negative security model — and it can block zero-day attacks.
  • The hybrid security model uses signatures as a first pass and then processes the request to see if it matches the allow list. You would be correct to ask, “Since an attack would not be on the allow list, why use an allow list?” The reason is that the negative security model, which uses signatures to block requests, requires less processing than running everything through the positive security model. More processing equates to larger WAF appliances or to higher costs for cloud-based hosting. (The toy sketch after this list illustrates the default-allow vs. default-block distinction.)
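
The toy Python sketch below illustrates only the default-allow vs. default-block distinction described above; the signatures and the allow pattern are made up for the example and are nowhere near production-grade.

import re

DENY_SIGNATURES = [re.compile(p, re.I) for p in (r"union\s+select", r"<script")]
ALLOW_PATTERNS = [re.compile(r"^[\w@.\- ]{1,64}$")]  # shape of a known-good form field

def negative_model(value: str) -> bool:
    # Deny list: allow by default unless a known-bad signature matches.
    return not any(sig.search(value) for sig in DENY_SIGNATURES)

def positive_model(value: str) -> bool:
    # Allow list: block by default unless the value matches known-good behavior.
    return any(pat.fullmatch(value) for pat in ALLOW_PATTERNS)

def hybrid_model(value: str) -> bool:
    # Cheap signature pass first, then the stricter allow list.
    return negative_model(value) and positive_model(value)

print(negative_model("alice@example.com"), positive_model("alice@example.com"))  # True True
print(negative_model("1 UNION SELECT *"), positive_model("1 UNION SELECT *"))    # False False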

All three WAF security models have one thing in common: They examine the inbound request and look for threats. The effectiveness of request-side examination depends on what the WAFs are looking for and how granularly they inspect the request payload.

How Attackers Take Advantage of WAF Limitations and IT’s Lack of Due Diligence

Attackers are aware that looking for attacks in traffic is computationally expensive for most organizations, and that commercial inspection solutions are designed to match real-world use cases as efficiently as possible. They know that real-world HTTP(S) GET or POST requests are usually only a few hundred bytes, maybe 1-2 kilobytes with some big cookies.

And attackers know that many WAF solutions will only scan a small, finite quantity of bytes for a request when looking for that Bad Thing. If WAFs don’t find it there, or if the request is bigger than 8 kilobytes as per NetScaler’s testing, many WAFs will not scan the request. They will consider it an anomaly and simply forward it on. I’ll say that again: Many WAFs simply forward the request with no blocking and no logging.

Wow.

The WAF ‘Hack’ Explained

To bypass WAFs, attackers leverage SQL injection or cross-site scripting and pad out the request with garbage to get it past the 8-kilobyte size and then hit send. Padding a request can be as simple as adding a huge header or cookie or other POST body text in the case of a login form. The request is not scanned and is passed through to the backend server where the payload is executed.
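
If you want to verify this against an application you own, a rough test sketch might look like the following. It assumes the third-party requests library and a placeholder staging URL of yours; the probe string and the roughly 8-kilobyte padding mirror the behavior described above, and both should be adjusted to your WAF vendor's documented inspection limits.

import requests  # third-party: pip install requests

TARGET = "https://staging.example.com/login"  # placeholder: only test apps you own
PROBE = "' OR '1'='1"                         # benign test signature a WAF should flag

def send_login(padding_bytes: int = 0) -> int:
    data = {
        "username": PROBE,
        "password": "x",
        # Filler field whose only job is to push the body past the inspection limit.
        "padding": "A" * padding_bytes,
    }
    return requests.post(TARGET, data=data, timeout=10).status_code

print("small request:", send_login(0))          # expect the WAF to block this one
print("padded request:", send_login(9 * 1024))  # if this sails through, investigate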

Some WAFs can be configured to counter padded attacks, but this protection is not turned on by default. Speculating as to why this is so, I can only arrive at the conclusion that turning on such protection requires extra processing, which drives up costs for WAF users. Not wanting their WAFs to be perceived as more expensive than their competitors, vendors leave additional protections disabled. Be aware that your web applications and APIs are fully exposed if you don’t change the default setting.

A single-pass WAF architecture that is available with a WAF solution like NetScaler performs miles better than traditional proxy strategies, which is why NetScaler can enable the protections against padded attacks out of the box without the added costs.

Are These WAF Vulnerabilities New?

Padded attacks are not new, and WAF vendors are well aware of the issue. But the WAF industry as a whole has not addressed the need for the most effective protection to be turned on by default.

Some analysts have communicated this gap in security to the vendors in question, with the vendor responses being along the lines of, “This is a known and documented limitation, and customers should apply this specific rule if they want this protection.” But the workaround is often buried in the nuts and bolts of the WAF configuration guide, and admins and deployment operators can (and do!) miss it.

In today’s world, where things need to “just work” when turned on and where there is the expectation that every solution used by IT will simplify tasks and reduce administrative overhead, WAFs need to be secured from the start. Sure, if a legitimate request needs to be bigger, then it will be blocked. That’s where exceptions can be made, and admins are aware of the risk when they do so. But leaving an entire site exposed should never be a consideration.

Attackers know that many WAFs do not have protections turned on by default, which is why they take advantage of this vulnerability with padded attacks. A couple of the WAFs that NetScaler tested were not vulnerable to this attack method, but many were. Some WAFs had slightly larger request limits (128 kilobytes) but were just as easy to bypass once the body was padded out. Some solutions favor this “fail open” approach to avoid additional costs resulting from extra processing, to prevent unexpected false negatives, and to allow for a more simplified — though less secure — setup.

However, the “fail open” approach violates the “strong defaults” principle of cybersecurity that we should expect from security vendors. When choosing a WAF, you need to ensure that you are protected out of the box against padded attacks.

The Takeaway: 3 Simple Steps to Securing Your WAF

Your WAF solution may not be correctly configured, leaving your web applications and APIs completely exposed to attackers who can easily deploy padded attacks via SQL injection and cross-site scripting.

As you race off to check your WAF configuration, here are your three must-dos:

  • Test your web applications (both internal and external) with padded requests.
  • Examine web application logs for large request sizes where they are not expected (see the sketch after this list): For example, look at a login POST form that typically contains just a username and password and ranges in size from approximately 20 to 300 bytes. If you see POST requests that are greater than 8 kilobytes in size, then this may be a padded attack attempt.
  • Evaluate whether you can make a configuration change that will mitigate padded attacks and, if you can, make sure to compare the before-and-after costs so that you get an accurate cost for the added protection.
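
To support the log-review step above, here is a small sketch that assumes your web server is configured to record the request size as the last field of each access log line (for example, via a custom nginx log format that includes $request_length); the threshold and file name are placeholders.

THRESHOLD = 8 * 1024  # bytes; adjust to your WAF's documented inspection limit

def suspicious_lines(log_path: str):
    # Yield POST log lines whose recorded request size exceeds the threshold.
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if "POST" not in line:
                continue
            last_field = line.rsplit(maxsplit=1)[-1]
            if last_field.isdigit() and int(last_field) > THRESHOLD:
                yield line.rstrip()

for hit in suspicious_lines("access.log"):  # placeholder path
    print(hit)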

By following this simple guidance, you can correctly configure your WAF to improve the security of your web applications and APIs.

The post How Attackers Bypass Commonly Used Web Application Firewalls appeared first on The New Stack.

]]>
Update NOW: OpenSSL 1.1.1’s Shelf-Life Has Ended https://thenewstack.io/update-now-openssl-1-1-1s-shelf-life-has-ended/ Wed, 13 Sep 2023 14:20:41 +0000 https://thenewstack.io/?p=22718092

OpenSSL is the most popular SSL (Secure Socket Layer) and TLS (Transport Layer Security) program in Linux, Unix, Windows, and

The post Update NOW: OpenSSL 1.1.1’s Shelf-Life Has Ended appeared first on The New Stack.

]]>

OpenSSL is the most popular SSL (Secure Socket Layer) and TLS (Transport Layer Security) program in Linux, Unix, Windows, and numerous operating systems. Besides operating systems, it’s used in web, security, and cloud applications. In other words, if you use anything requiring network security, chances are good you’re using OpenSSL.

So, you should pay attention now that the OpenSSL Project has officially announced the End of Life (EOL) of its Long Term Support (LTS) 1.1.1 version as of Sept. 11, 2023. From here on out, the 1.1.1 series will no longer get publicly available security updates.

Users who have procured OpenSSL 1.1.1 from an operating system vendor, such as through .rpm or .deb packages, or any other third-party source, might experience different support timelines. I wouldn’t bet on it, though. You must consult with your vendors to understand your support options.

In the meantime, as Alex Rybak, security expert and technology company Revenera’s Senior Director of Product Management, wrote on LinkedIn: “Make sure to update your OSS [open-source software] policies to auto-reject OpenSSL v1.1.1* since there will no longer be any security patches. Don’t forget to check your 3rd-party binaries for embedded versions of OpenSSL.”
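
One quick, if partial, check you can run wherever Python is installed is to ask which OpenSSL your interpreter is linked against. It will not find every embedded copy in third-party binaries, but it does catch the runtime that your own scripts and many services use.

import ssl

print(ssl.OPENSSL_VERSION)  # e.g. "OpenSSL 1.1.1w  11 Sep 2023"
if ssl.OPENSSL_VERSION_INFO[0] < 3:
    print("Linked OpenSSL is out of public support; plan an upgrade to 3.0 or later.")
else:
    print("Linked OpenSSL is a supported 3.x release.")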

Better still, upgrade to OpenSSL 3.1. This version will be supported until March 14, 2025. Or, better yet, from where I sit, move to OpenSSL 3.0, which is an LTS release. It will be supported until Sept. 7, 2026.

The difference between 3.0 and 3.1 is that 3.1 includes some non-Federal Information Processing Standard (FIPS) validated algorithms. These algorithms are Triple DES ECB, Triple DES CBC, and EdDSA. Unless you specifically need one of these — or you don’t trust the FIPS algorithms — OpenSSL 3.0 is for you.

Of course, upgrading comes with its own problems. As OpenSSL warns, “Any application that currently uses an older version of OpenSSL will at the very least need to be recompiled in order to work with the new version.”

Ouch.

So, it’s possible that you really may be stuck supporting OpenSSL 1.1.1 for years to come. If that’s you, the OpenSSL Project offers a premium support contract. And, when I say “premium,” I mean premium.

Enterprise customers that have OpenSSL 1.1.1 or OpenSSL 1.0.2 baked into their applications or services can pay $50,000 a year for extended support, including security fixes. Vendor Level Support, for businesses using OpenSSL in a single product or product line, costs $25,000 annually. Basic Support, for companies that use OpenSSL in significant products or services but lack the internal resources to address their operational and application development issues, will run you $15,000 a year. Although OpenSSL doesn’t explicitly state this, I presume the Project will also offer security patches to lower support level customers.

This extended support doesn’t have a fixed end date. The OpenSSL Project aims to offer it as long as it remains a commercially viable option.

Shifting over is going to take a while. I know there are many embedded programs and Internet of Things (IoT) devices that rely on OpenSSL 1.1.1 and will never be updated. The legacy problem will bite many users and companies in the rump. Eventually, you’ll have to upgrade. But it’s going to take longer than everyone wants.

The post Update NOW: OpenSSL 1.1.1’s Shelf-Life Has Ended appeared first on The New Stack.

]]>
Chae$ 4: The Evolution of a Cyberthreat https://thenewstack.io/chae-4-the-evolution-of-a-cyberthreat/ Mon, 11 Sep 2023 19:03:40 +0000 https://thenewstack.io/?p=22717946

Chae$ 4 isn’t your run-of-the-mill Chaes malware variant. The earlier versions of Chaes stole information, primarily login credentials, from browsers.

The post Chae$ 4: The Evolution of a Cyberthreat appeared first on The New Stack.

]]>

Chae$ 4 isn’t your run-of-the-mill Chaes malware variant. The earlier versions of Chaes stole information, primarily login credentials, from browsers. It could also capture screens, monitor browsers, and perform reconnaissance. Annoying, but nothing to write home about. Now, the endpoint security company Morphisec has discovered a new and advanced variant, Chae$ 4. This variant primarily targets the logistics and financial sectors, which means business.

The primary targets have been prominent platforms and banks, including Mercado Libre, Mercado Pago, WhatsApp Web, Itau Bank, Caixa Bank, and MetaMask. Additionally, many content management system (CMS) services, such as WordPress, Joomla, Drupal, and Magento, have also been compromised.

Along with targeting FinOps companies, Chaes has undergone significant revamps, from a complete rewrite in Python, which led to decreased detection rates by traditional defense systems, to a full redesign with an enhanced communication protocol. The malware now also features a range of new modules that amplify its malicious capabilities.

Specifically, it now boasts:

  • Enhanced code architecture and modularity.
  • Increased encryption and stealth capabilities.
  • Shift to Python for decryption and dynamic in-memory execution.
  • Replacement of Puppeteer with a custom approach for monitoring Chromium browsers.
  • Expanded target services for credential theft.
  • Use of WebSockets for communication between modules and the C2 server.
  • Implementation of domain generation algorithm (DGA) for dynamic C2 server address resolution.

The malware initiates with a deceptive MSI Windows installer, typically masquerading as a JAVA JDE installer or Antivirus software. Once executed, the malware deploys and downloads its required files, activating the core module, ChaesCore. This module sets persistence and migrates into targeted processes, subsequently starting its malicious activities.

During the investigation, Morphisec identified seven distinct modules, each with its unique functionalities. Notably, the threat actor displays a pronounced interest in cryptocurrency, evident from the clipper’s usage to steal BTC and ETH and the module that pilfers MetaMask crypto wallet credentials.

If you want to know more, check out Morphisec’s in-depth technical analysis

of Chae$ 4. Stay informed, stay safe.

The post Chae$ 4: The Evolution of a Cyberthreat appeared first on The New Stack.

]]>
Is Security a Dev, DevOps or Security Team Responsibility? https://thenewstack.io/is-security-a-dev-devops-or-security-team-responsibility/ Thu, 07 Sep 2023 13:26:09 +0000 https://thenewstack.io/?p=22717624

No matter what role you work in — software development, DevOps, ITOps, security or any other technical position — you

The post Is Security a Dev, DevOps or Security Team Responsibility? appeared first on The New Stack.

]]>

No matter what role you work in — software development, DevOps, ITOps, security or any other technical position — you probably appreciate the importance of strong cyber hygiene.

But you may be unsure whose job it is to act on that principle. Although the traditional approach to cybersecurity at most organizations was to expect security teams to manage risks, security engineers often point fingers at other teams, telling them it’s their job to ensure that applications are designed and deployed securely.

For their part, developers might claim that security is mainly the responsibility of DevOps or ITOps, since those are the teams that have to manage applications in production — the place where most attacks occur — whereas developers only design and build software.

Meanwhile, the operations folks often point their fingers back at developers, arguing that if there are vulnerabilities inside an application that attackers exploit once the app is in production, the root cause of the problem is mistakes made by developers, not DevOps or ITOps engineers.

On top of all of this, engineers can treat other stakeholders as bearing primary responsibility for security. They might say that if a breach occurs, it’s because a cloud provider didn’t have strong access controls or because end users did something irresponsible, for example.

Cloud Security Is a Collective Responsibility

Who’s right? Nobody, actually. Security is not the job of any one group or type of role.

On the contrary, security is everyone’s job. Forward-thinking organizations must dispense with the mindset that a certain team “owns” security, and instead embrace security as a truly collective team responsibility that extends across the IT organization and beyond.

After all, there is a long list of stakeholders in cloud security, including:

  • Security teams, who are responsible for understanding threats and providing guidance on how to avoid them.
  • Developers, who must ensure that applications are designed with security in mind and that they do not contain insecure code or depend on vulnerable third-party software to run.
  • ITOps engineers, whose main job is to manage software once it is in production and who therefore play a leading role both in configuring application-hosting environments to be secure and in monitoring applications to detect potential risks.
  • DevOps engineers, whose responsibilities span both development and ITOps work, placing them in a position to secure code during both the development and production stages.
  • Cloud-service providers, who are responsible for ensuring that underlying cloud infrastructure is secure, and who provide some (though certainly not all) of the tooling (like identity and access management frameworks) that organizations use to protect cloud workloads.
  • End users, who need to be educated about cloud security best practices in order to resist risks like insecure sharing of business data between applications and phishing attacks.

It would be nice if just one of these groups could manage all aspects of cybersecurity, but they can’t. There are too many types of risks, which manifest across too many different workflows and resources, for cloud security to be the responsibility of any one group.

Every Organization — and Every Security Responsibility Model — Is Different

On top of this, there is the challenge that, depending on your organization, not all of the groups above may even exist. Maybe you no longer have development and ITOps teams because you’ve consolidated them into a single DevOps team. Maybe you’re not large enough to employ a full-time security team. Maybe you don’t use the public cloud, in which case there is no cloud provider helping to secure your underlying infrastructure.

My point here is that organizations vary, and so do the security models that they can enforce. There is no one-size-fits-all strategy for delegating security responsibilities between teams or roles.

Putting DevSecOps into Practice

All of the above is why it’s critical to operationalize DevSecOps — the idea that cloud security is a shared responsibility between developers, security teams, and operations teams — across your organization.

Now, this may seem obvious. There’s plenty of talk today about DevSecOps and plenty of organizations that claim to be “doing” DevSecOps.

But just because a business says it has embraced DevSecOps doesn’t necessarily mean that security has seeped into all units and processes of the business. Sometimes DevSecOps is just jargon that executives toss around to sound like they take security seriously, even though they haven’t actually changed the organizational culture surrounding security. Other times, DevSecOps basically means that your security team talks to developers and ITOps, but your business still treats the security team as the primary stakeholder in security operations.

Approaches like these aren’t enough. In a world where every year sets new records for the pace and scope of cyberattacks, security truly needs to be the job of your entire organization — not just technical teams, but also nontechnical stakeholders like your “business” employees and even external stakeholders such as cloud-service providers and partners. It’s only by enforcing security at every level of the organization, and at every stage of your processes, that you can move the needle against risks.

Conclusion: To Change Security, Change Your Mindset

So, don’t just talk about DevSecOps or rest on your laurels because you’ve designated a certain group of engineers as the team that “owns” security. Strive instead to make cloud security a priority for every stakeholder inside and outside your business who plays a role in helping to protect IT assets. Until the answer to “who’s responsible for security?” is “everyone,” you’ll never be as secure as you can be.

Want to take charge of your cloud security? The Orca Cloud Security Platform offers comprehensive visibility into your cloud environment, providing prioritized alerts for vulnerabilities, misconfigurations, compromises and other potential threats across your entire inventory of cloud accounts. To get started, request a demo of the Orca cloud-security platform or sign up for a free cloud risk assessment today.

Further Reading

The post Is Security a Dev, DevOps or Security Team Responsibility? appeared first on The New Stack.

]]>
Britive: Just-in-Time Access across Multiple Clouds https://thenewstack.io/britive-just-in-time-access-across-multiple-clouds/ Thu, 07 Sep 2023 10:00:41 +0000 https://thenewstack.io/?p=22717397

Traditionally when a user was granted access to an app or service, they kept that access until they left the

The post Britive: Just-in-Time Access across Multiple Clouds appeared first on The New Stack.

]]>

Traditionally when a user was granted access to an app or service, they kept that access until they left the company. Unfortunately, too often it wasn’t revoked even then. This perpetual 24/7 access left companies open to a multitude of security exploits.

More recently the idea of just-in-time (JIT) access has come into vogue, addressing companies’ growing attack surface that comes with the proliferation of privileges granted for every device, tool and process. Rather than ongoing access, the idea is to grant it only for a specific time period.

But managing access manually for the myriad technologies workers use on a daily basis, especially at companies with thousands of employees, would be onerous. And with many companies adopting a hybrid cloud strategy, where each cloud has its own identity and access management (IAM) protocols, the burden grows. With zero standing privileges considered a pillar of a zero trust architecture, JIT access paves the way to achieve it.

Glendale, California-based Britive is taking on the challenge of automating JIT access across multiple clouds not only for humans but also for machine processes.

“We recognize that in the cloud, access is typically not required to be permanent or perpetual,” pointed out Britive CEO and co-founder Art Poghosyan. “Most of access is so frequently changing and dynamic, it really doesn’t have to be perpetual standing access … if you’re able to provision with an identity at a time when [users] need it. With proper security, guardrails in place and authorization in place, you really don’t need to keep that access there forever. … And that’s what we do, we call it just-in-time ephemeral privilege management or access management,”

‘Best Left to Automation’

Exploited user privileges have led to some massive breaches in recent years, like SolarWinds, MGM Resorts, Uber and Capital One. Even IAM vendor Okta fell victim.

In the Cloud Security Alliance report “Top Threats to Cloud Computing,” more than 700 industry experts named identity issues as the top threat overall.

And in “2022 Trends in Securing Digital Identities,” of more than 500 people surveyed, 98% said the number of identities is increasing, primarily driven by cloud adoption, third-party relationships and machine identities.

Pointing in particular to cloud identity misconfigurations, a problem occurring all too often, Matthew Chiodi, then Palo Alto Networks’ public cloud chief security officer, cited a lack of IAM governance and standards multiplied by “the sheer volume of user and machine roles combined with permissions and services that are created in each cloud account.”

Chiodi added, “Humans are good at many things, but understanding effective permissions and identifying risky policies across hundreds of roles and different cloud service providers are tasks best left to algorithms and automation.”

JIT systems take into account whether a user is authorized to have access, the user’s location and the context of their current task. Access is granted only if the given situation justifies it and is revoked when the task is done.
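
Conceptually, a just-in-time grant is little more than a permission with an expiry attached. The Python sketch below is a deliberately simplified, in-memory illustration of that idea with invented names; it is not how Britive or any particular vendor implements it.

import time

# (user, resource) -> expiry timestamp in seconds since the epoch
_grants: dict[tuple[str, str], float] = {}

def grant(user: str, resource: str, ttl_seconds: int) -> None:
    # Grant elevated access to the resource for a limited window only.
    _grants[(user, resource)] = time.time() + ttl_seconds

def is_allowed(user: str, resource: str) -> bool:
    # Allow only while an unexpired grant exists; expired grants fall back to zero standing privileges.
    expiry = _grants.get((user, resource))
    if expiry is None or expiry < time.time():
        _grants.pop((user, resource), None)
        return False
    return True

grant("alice", "prod-db", ttl_seconds=900)   # a 15-minute elevation window
print(is_allowed("alice", "prod-db"))        # True during the window
print(is_allowed("alice", "billing-api"))    # False: no standing access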

Addressing Need for Speed

Founded in 2018, Britive automates JIT access privileges, including tokens and keys, for people and software accessing cloud services and apps across multiple clouds.

Aside from the different identity management processes involved with cloud platforms like Azure, Oracle, Amazon Web Services (AWS) and Google, developers in particular require access to a range of tools, Poghosyan pointed out.

“Considering the fact that a lot of what they do requires immediate access … speed is the topmost priority for users, right?” he said.

“And so they use a lot of automation, tools and things like HashiCorp Terraform or GitHub or GitLab and so on. All these things also require access and keys and tokens. And that reality doesn’t work well with the traditional IAM tools where it’s very much driven from a sort of corporate centralized, heavy workflow and approval process.

“So we built technology that really, first and foremost, addresses this high velocity and highly automated process that cloud environments users need, especially development teams,” he said, adding that other teams, like data analysts who need access to things like Snowflake or Google Big Query and whose needs change quickly, would find value in it as well.

“That, again, requires a tool or a system that can dynamically adapt to the needs of the users and to the tools that they use in their day-to-day job,” he said.

Beyond Role-Based Access

Acting as an abstraction layer between the user and the cloud platform or application, Britive uses an API-first approach to grant access with the level of privileges authorized for the user. A temporary service account sits inside containers for developer access rather than using hard-coded credentials.

While users normally work with the least privileges required for their day-to-day jobs, just-in-time access grants elevated privileges for a specific period and revokes those permissions when the time is up. Going beyond role-based access control (RBAC), the system is flexible enough to allow companies to alternatively base access on attributes of the resource in question (attribute-based access) or policy (policy-based access), Poghosyan said.
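
As a rough illustration of the difference, an attribute-based check evaluates properties of the request and the resource rather than a fixed role. The attributes and the policy below are invented for the example and are not Britive's actual policy language.

from datetime import datetime

def attribute_based_allow(request: dict, resource: dict) -> bool:
    # Toy policy: business hours only, matching environments, and a change
    # ticket required for anything marked as production.
    in_hours = 9 <= datetime.now().hour < 18
    same_env = request.get("environment") == resource.get("environment")
    ticket_ok = resource.get("environment") != "prod" or bool(request.get("change_ticket"))
    return in_hours and same_env and ticket_ok

print(attribute_based_allow(
    {"environment": "prod", "change_ticket": "CHG-1234"},
    {"environment": "prod"},
))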

The patented platform integrates with most cloud providers and with CI/CD automation tools like Jenkins and Terraform.

Its cross-cloud visibility provides a single view into issues such as misconfigurations, high-risk permissions and unusual activity across your cloud infrastructure, platform and data tools. Data analytics offers risk scores and right-sizing access recommendations based on historical use patterns. The access map provides a visual representation of the relationships between policies, roles, groups and resources, letting you know who has access to what and how it is used.

The company added cloud infrastructure entitlement management (CIEM) in 2021 to understand privileges across multicloud environments and to identify and mitigate risks when the level of access is higher than it should be.

The company launched Cloud Secrets Manager in March 2022, a cloud vault for static secrets and keys when ephemeral access is not feasible. It applies the JIT concept of ephemeral creation of human and machine IDs like a username or password, database credential, API token, TLS certificate, SSH key, etc. It addresses the problems of hard-coded secrets management in a single platform, replacing embedded API keys in code by retrieving keys on demand and providing visibility into who has access to which secrets and how and when they are used.

In August it released Access Builder, which provides self-service access requests to critical cloud infrastructure, applications and data. Users set up a profile that can be used as the basis of access and can track the approval process. Meanwhile, administrators can track requested permissions, gaining insights into which identities are requesting access to specific applications and infrastructure.

Range of Integrations

Poghosyan previously co-founded Advancive, an IAM consulting company acquired by Optiv in 2016. Poghosyan and Alex Gudanis founded Britive in 2018. It has raised $35.9 million, most recently $20.5 million in a Series B funding round announced in March. Its customers include Gap, Toyota, Forbes and others.

Identity and security analysts KuppingerCole named Britive among the innovation leaders in its 2022 Leadership Compass report along with the likes of CyberArk, EmpowerID, Palo Alto Networks, Senhasegura, SSH and StrongDM that it cited for embracing “the new worlds of CIEM and DREAM (dynamic resource entitlement and access management) capability.”

“Britive has one of the widest compatibilities for JIT machine and non-machine access cloud services [including infrastructure, platform, data and other ‘as a service’ solutions] including less obvious provisioning for cloud services such as Snowflake, Workday, Okta Identity Cloud, Salesforce, ServiceNow, Google Workspace and others – some following specific requests from customers. This extends its reach into the cloud beyond many rivals, out of the box,” the report states.

It adds that it is “quite eye-opening in the way it supports multicloud access, especially in high-risk develop environments.”

Poghosyan pointed to two areas of focus for the company going forward: one is building support for non-public cloud environments because that’s still an enterprise reality, and the other is going broader into the non-infrastructure technologies. It’s building a framework to enable any cloud application or cloud technology vendor to integrate with Britive’s model, he said.

The post Britive: Just-in-Time Access across Multiple Clouds appeared first on The New Stack.

]]>
SBOMs, SBOMs Everywhere https://thenewstack.io/sboms-sboms-everywhere/ Wed, 06 Sep 2023 16:13:26 +0000 https://thenewstack.io/?p=22717562

Talk about software bills of materials or SBOMs has become even more prevalent in the wake of many supply chain

The post SBOMs, SBOMs Everywhere appeared first on The New Stack.

]]>

Talk about software bills of materials or SBOMs has become even more prevalent in the wake of many supply chain attacks that have occurred in the past few years. Software supply chain attacks can target upstream elements of your software, like open source libraries and packages, and SBOMs are a way to understand what’s in your application or container images.

But while SBOMs are a useful piece of information, there are plenty of questions teams are asking about them: Do we need an SBOM? What do we do with them once we produce them? How can I use them during a security incident?

To answer these and other questions, let’s start with what an SBOM actually is.

What Is an SBOM?

A software bill of materials is a comprehensive inventory of all of the software components and dependencies used in a software application or system. This enables security teams as well as developers to have a better understanding of the third-party resources and imports they are using, particularly when new vulnerabilities in open source packages are constantly being discovered. To protect your organization from these threats, you first have to know what you even have in your stack.

Containers have become the de facto way that developers package and ship software in today’s cloud native landscape. In the context of containers and SBOMs, the equivalent of a software bill of materials for a container is a JSON file listing all the packages, libraries and components used in both the application and the surrounding container. This JSON file includes version information for all of these components and is machine-readable, which is no less important.

If we were to liken this to something we’re all familiar with, this JSON is almost like the nutrition label you’d find on packaged food, but for your containerized application. This JSON file is also a point-in-time artifact, meaning it is tied to a specific SHA256 digest of a container. This differs from mutable tags like latest, in that an SBOM for a mutable tag would change over time, but not for a specific SHA256 digest.

Another important aspect is that the SBOM file carries all the historical information that is ultimately managed by git, giving you critical retrospective knowledge of what was running in your container in any given build. This will enable you to take action when a vulnerability is discovered by knowing what is in the containers that are currently running in production. This also makes the data easier to search for rapid inventory during zero-day attacks.

The Log4j zero-day attack that occurred in December 2021 was believed to have affected over 100 million software environments and applications. CISA recognized this attack as a threat to governments and organizations across the world. The Cyber Safety Review Board’s postmortem report on Log4j, backed by OpenSSF and the Linux Foundation, encouraged the industry to use software component tools such as SBOMs to reduce the time between awareness and mitigation of vulnerabilities.

SBOMs make it easier for companies to understand which container versions they are running in production and how exposed their production systems are when a Log4j-type incident occurs. As in many areas of the software supply chain, there are plenty of excellent open source tools that can produce an SBOM for you, such as Syft, Trivy, BOM and CycloneDX, as well as others that provide commercial services.
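
To give a sense of how little friction generating one involves, here is roughly what producing a CycloneDX SBOM for a public image looks like with Syft and Trivy, two of the tools just mentioned. Exact flags can vary between versions, and nginx:latest is only an arbitrary example image:

syft nginx:latest -o cyclonedx-json > sbom.cdx.json

trivy image --format cyclonedx --output sbom.cdx.json nginx:latest

Either command walks the image’s layers, catalogs the installed packages and writes a machine-readable inventory you can archive alongside the build.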

Show Me the Code

Below you’ll find an example of a slice of an SBOM from a Node container image from Docker Hub with over 1.4 billion pulls. We hopped into the Slim platform (portal.slim.dev/login) to search for the public Node image, analyzed the image for vulnerabilities, and then downloaded the SBOM directly off the platform in a CycloneDX JSON format.

"$schema": "http://cyclonedx.org/schema/bom-1.4.schema.json",
 "bomFormat": "CycloneDX",
 "specVersion": "1.4",
 "serialNumber": "urn:uuid:aaf2dfd5-5294-4277-8cc1-f7fe6f6d514b",
 "version": 1,
 "metadata": {
   "timestamp": "2023-06-21T13:26:31Z",
   "tools": [
     {
       "vendor": "slim.ai",
       "name": "slim",
       "version": "0.0.1"
     }
   ],
   "component": {
     "bom-ref": "b1ef6d159e61300a",
     "type": "container",
     "name": "index.docker.io/library/latest:latest",
     "version": "sha256:b3fc03875e7a7c7e12e787ffe406c126061e5f68ee3fb93e0ef50aa3f299c481"
   }
 },


You can see the metadata that’s provided in this slice. A full SBOM download would include all the associated components, packages and libraries, a short description of their purpose/use, publishers, distribution types and their dependencies.

So this all raises the question we started out with: What do we actually do with an SBOM? The truth is, this is still a work in progress, and there aren’t many senior developers or security engineers who have a perfect answer for it. For the most part, the goal of an SBOM is to have this inventory accessible, backed up and safe. You’ll often find an SBOM stored in an artifact repository, backed up to an S3 bucket or hosted with a provider to enable easy access when knowing what’s running in your container becomes mission critical.
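
To make that mission-critical moment concrete, here is a rough sketch of what an incident-time lookup could look like if your SBOMs were backed up to an S3 bucket. The bucket and file names are hypothetical, and the jq filter assumes the CycloneDX layout shown above:

aws s3 cp s3://acme-sboms/payments-api.cdx.json .

jq -r '.components[] | select(.name | test("log4j")) | "\(.name) \(.version)"' payments-api.cdx.json

Run across every stored SBOM, a query like this answers the question of whether the vulnerable package is running, and where, in minutes rather than days.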

Not only is there the question of how to truly extract value from these artifacts, but also of how teams will manage them as containers continue to grow in size and complexity. This growth can lead to longer CI/CD processing times and an increased workload for DevSecOps teams. The use and management of SBOMs will continue to have a spotlight in software supply chain management.

The Impending Importance of Software Transparency

Chris Hughes and Tony Turner break down the fundamental principles of what SBOMs encapsulate in their latest book, “Software Transparency,” which describes SBOMs as a foundational element in achieving software transparency, enabling organizations to identify potential vulnerabilities and proactively address them. Although there are concerns about SBOMs providing visibility for attackers, Hughes states that “having an SBOM puts software consumers in a much better position to understand both the initial risk associated with software use as well as new and emerging vulnerabilities associated with software components in the software they consume.”

According to Gartner, by 2025, SBOMs will be a requirement for 60% of software providers, as they become a critical component of achieving software supply chain security. This prediction underscores the need to start generating SBOMs ahead of demand that will only grow.

Amazon Web Services (AWS) recently announced support for SBOM export in Amazon Inspector, a vulnerability management service that scans workloads across your entire AWS organization for insight into your software supply chain. When heavy hitters such as AWS and Docker release SBOM export features, it’s a clear signal that the demand for software transparency from software providers will only increase. Slim.AI also provides a pathway for generating and managing SBOMs for your container images.

The Slim Solution for the SBOM Surge

In the ever-evolving landscape of software supply chain security (SSCS), staying ahead of future requirements is imperative. Slim uses advanced scanning and analysis capabilities to generate SBOMs that you can immediately download. We thoroughly inspect the entire stack to extract crucial information and construct a detailed inventory of components. This enables organizations to maintain a robust security posture while ensuring compliance with evolving regulatory requirements.

NTIA recommends generating and storing SBOMs at build time or in your container images in preparation for new releases. On the Slim platform, you can connect to your container registries (such as AWS, Docker Hub, Google Container Registry and others) to store SBOMs for each of your container images. SBOMs are generated for both the original and hardened container images as part of the many artifacts that are accessible via the platform. Flow through our container hardening process on the platform to generate a smaller, more optimized and less vulnerable version of your container image to deploy to production.

The Evolving Future of SBOMs

In the congressional hearings that followed Log4j, the message from the cloud native industry was clear: SBOMs are just a starting point. While full software inventories are necessary for triaging risk in the event of an attack, they are not by themselves a means to prevent attacks. There’s excitement around what new tools will be made available that use SBOMs as their source of truth.

Until then, most registry providers are working on the capability to store and manage SBOMs directly inside your registry of choice.
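
Some of that is already possible with OCI registry tooling. Sigstore’s cosign, for example, can attach an SBOM to an image reference and pull it back down later. Here is a rough sketch with a hypothetical image name, noting that cosign’s SBOM subcommands have continued to evolve:

cosign attach sbom --sbom sbom.cdx.json registry.example.com/acme/payments-api:1.4.2

cosign download sbom registry.example.com/acme/payments-api:1.4.2 > sbom.cdx.json

Storing the SBOM next to the image it describes keeps the two artifacts versioned and distributed together.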

With hugely popular containers and packages having built-in and maintained SBOMs, it will be much easier and faster to start mitigating and reducing risk with resources taken from the wild. In addition, many security organizations like OWASP and the OpenSSF are working toward making tooling more accessible and dev-friendly to drive adoption and wider usage.

Measures like slimming and hardening containers add further security benefits by ensuring you ship to production only the critical packages your application truly requires. This will give us greater trust in our third-party packages and imports, and greater security for our entire software supply chain.

The post SBOMs, SBOMs Everywhere appeared first on The New Stack.

]]>
Open Source Needs Maintainers. But How Can They Get Paid? https://thenewstack.io/open-source-needs-maintainers-but-how-can-they-get-paid/ Wed, 06 Sep 2023 10:00:59 +0000 https://thenewstack.io/?p=22717420

Jordan Harband is the sort of person the tech industry depends on: a maintainer of open source software projects. Lots

The post Open Source Needs Maintainers. But How Can They Get Paid? appeared first on The New Stack.

]]>

Jordan Harband is the sort of person the tech industry depends on: a maintainer of open source software projects.

Lots of them — by his count, about 400.

Harband, who has worked at Airbnb and Twitter, among other companies, was laid off from Coinbase more than a year ago. The Bay Area resident is now a contractor for the OpenJS Foundation, as a security engineering champion.

He also gets paid for some of his freelance open source maintenance work, by Tidelift and other sponsors, labor that he estimates takes up 10 to 20 hours a week.

His work is essential to the daily productivity of developers around the globe. In aggregate, some projects he maintains, he told The New Stack, are responsible for between 5% and 10% of npm’s download traffic.

But spending all of his time on his open source projects, he said, would not be possible “without disrupting my life and my family and our benefits and lifestyle.”

Case in point: his COBRA health insurance benefits from Coinbase run out at the end of the year. “If I don’t find a full-time job, I have to find my own health insurance,” he said. “That’s just not a stressor that should be in anyone’s life, of course, but certainly not in the life of anyone who’s providing economic value to so many companies and economies.”

Harband is the sole maintainer of many of the projects he works on. He’s not the only developer in that situation. And that reliance on an army of largely unpaid hobbyists, he said, is dangerous and unsustainable.

“We live in capitalism, and the only way to ensure anything gets done is capital or regulation — the carrot or the stick,” he said. “The challenge is that companies are relying on work that is not incentivized by capital or forced by regulation. Nobody’s held to task, other than by market forces, if they ship poor or insecure software.”

And, Harband added, “There is a lack of enforcement of fiduciary duty on companies that use open source software — which is basically all of them — because it’s their fiduciary duty to invest in their infrastructure. Open source software is everyone’s infrastructure, and it is wildly underinvested in.”

The ‘Bus Factor’ and the ‘Boss Factor’

The world’s reliance on open source software — and the people who maintain it — is no secret. For instance, Synopsys’ 2023 open source security report, which audited more than 1,700 codebases across 17 industries, found that:

  • 96% of the codebases included open source software.
  • Just over three-quarters of the code in the codebases — 76% — was open source.
  • 91% of the codebases included open source software that had had no developer activity in the past two years — a timeframe that could indicate, the report suggested, that an open source project is not being maintained at all.

This decade, there have been a number of attempts to set standards for open source security: executive orders by the Biden administration, new regulations from the European Union, the formation of the Open Source Security Foundation (OpenSSF) and the release of its security scorecard.

In February 2022, the U.S. National Institute of Standards and Technology (NIST) released its updated Secure Software Development Framework, which provides security guidelines for developers.

But the data show that not only are open source maintainers usually unaware of current security tools and standards, like software bills of materials (SBOMs) and supply-chain levels for software artifacts (SLSA), but they are largely unpaid and, to a frightening degree, on their own.

A study released in May by Tidelift found that 60% of open source maintainers would describe themselves as “unpaid hobbyists.” And 44% of all maintainers said they are the only person maintaining a project.

“Even more concerning than the sole maintainer projects are the zero maintainer projects, of which there are a considerable amount as well that are widely used,” Donald Fischer, CEO and co-founder of Tidelift, told The New Stack. “So many organizations are just unaware because they don’t even have telemetry, they have no data or visibility into that.”

In Tidelift’s survey, 36% of maintainers said they have considered quitting their project; 22% said they already had.

It brings to mind the morbid “bus factor” — what happens to a project if a sole maintainer gets hit by a bus? (Sometimes this is called the “truck factor.” But the hypothetical tragic outcome is the same.)

An even bigger threat to continuity in open source project maintenance is the “boss factor,” according to Fischer.

The boss factor, he said, emerges when “somebody gets a new job, and so they don’t have as much time to devote to their open source projects anymore, and they kind of let them fall by the wayside.”

Succession is a thorny issue in the open source community. In a report issued by Linux Foundation Research in July, in which the researchers interviewed 32 maintainers of some of the top 200 critical open source projects, only 35% said their project has a strong new contributor pipeline.

Valeri Karpov has been receiving support from Tidelift for his work as chief maintainer of Mongoose, an object modeling library for MongoDB, for the past five years. The Miami resident spends roughly 60 hours a month on the project, he told The New Stack.

He inherited the chief maintainer role in 2014 when he worked at MongoDB as a software engineer. The project’s previous maintainer had decided not to continue with it. Today, a junior developer who also works for Karpov’s application development company contributes to Mongoose, along with three volunteers.

For a primary maintainer who does not have the support he has, he said, there are other challenges in addition to the matter of doing work for free. For starters, there’s finding time to keep up with changes in a project’s ecosystem.

Take Mongoose, for example. The tool helps build Node.js applications with MongoDB. “JavaScript has changed a lot since I started working on Mongoose, Node.js as well,” Karpov said. “When I first started working on Mongoose, [JavaScript] Promises weren’t even a core part of the language. TypeScript existed, but still wasn’t a big deal. All sorts of things have changed.”

And if your project becomes popular? You’ll be spending an increasing amount of time offering user support and responding to pull requests, Karpov said: “We get like dozens of inbound GitHub issues per day. Keeping up on that took some getting used to.”

How Maintainers Can Get Paid

It would seem to be in the best interest of the global economy to pay the sprawling army of hobbyists who build and maintain open source code — compensating them for the time and headaches involved in maintaining their code, recruiting new contributors and making succession plans, and boning up on the latest language and security developments.

But the funding landscape remains patchy. Among the key avenues for financial support:

Open source program offices (OSPOs). No one knows exactly how many organizations maintain some sort of OSPO or other in-house support for their developers and engineers who contribute to open source software.

However, data from Linux Foundation Research studies shows increasing rates of OSPO adoption among public sector and educational institutions, according to Hilary Carter, senior vice president of research and communications at the foundation.

About 30% of Fortune 100 companies maintain OSPOs, according to GitHub’s 2022 Octoverse report on the state of open source software. Frequently, an enterprise will support work only on open source software that is directly related to the employer’s core business.

Why don’t more corporations support open source work? “Many organizations, especially those outside the tech sector, often do not fully understand the advantages of having an OSPO, or the strategic value of open source usage, or the benefits that come from open source contributions,” said Carter, in an email response to The New Stack’s questions.

“Their focus may be short-term in nature, or there may be concerns about intellectual property and licensing issues. Depending on the industry developers work in, highly regulated industries like financial services often have policies that prohibit any kind of open source contribution, even to projects their organizations actively use. Education and outreach are key to changing these perceptions.”

Stormy Peters, vice president of communities at GitHub, echoed the notion that many companies remain in the dark about the benefits of OSPOs.

“An OSPO can help software developers, procurement officers and legal teams understand how to select an open source license, or how non-technology staff can engage local communities in the design and development of a tool,” Peters wrote, in an email response to The New Stack’s questions.

“OSPOs create a culture shift toward more open, transparent and accountable methods of building tech tools to ensure sustainability.”

Foundations. Sometimes foundations created to house an open source project will provide financial support to the maintainers of that project. The Rust Foundation, for example, offers grants to maintainers of that popular programming language.

However, such an approach has its limits, noted Harband. “One of the huge benefits of foundations for projects is that they give you that sort of succession path,” he said. “But private foundations can’t support every project.”

In 2019, the Linux Foundation introduced CommunityBridge, a project aimed at helping open source maintainers find funding. The foundation pledged to match organizational contributions up to a cumulative total of $500,000; GitHub, an inaugural supporter, donated $100,000.

But CommunityBridge has evolved into LFX Crowdfunding, part of the foundation’s collaboration portal for open source projects. “Projects receive 100% of donations and manage their own funds, which can support mentorship programs, events or other sustainability requirements,” wrote Carter in her email to TNS.

Carter also pointed to OpenSSF’s Alpha-Omega Project. Launched in February 2022, the project supports maintainers who find and fix security vulnerabilities in critical open source projects. In June, for instance, the project announced that it had funded a new security developer in residence for one year at the Python Software Foundation.

Alpha-Omega, Carter wrote, “creates a pathway for critical open source projects to receive financial support and improve the security of software supply chains.” She urged organizations that have a plan for how funds can be used or can offer funding to get in touch with OpenSSF, which is a Linux Foundation project.

Monetization platforms. Tidelift is among the platforms listed at oss.fund, a crowd-sourced and -curated catalog of sources through which open source maintainers can acquire financial support.

Fischer’s organization pays people “to do these important but sometimes tedious tasks” that open source projects need, he said. “We’ve had success attracting new maintainers to either projects where the primary maintainer doesn’t want to do those things, or in some rare cases is prohibited from doing it because of their employment agreement with somebody else.”

The rates for such work vary, depending on variables including the size of the open source project and how widely it is used. “Our most highly compensated maintainers on the platform are now making north of six figures, U.S. income, off of their Tidelift work,” Fischer said. “Which is great, because that means, basically, independent open source maintainership is now a profession.”

Among the most high-profile monetization platforms is GitHub Sponsors, which was launched in beta in 2019 and became generally available for organizations to sponsor open source workers this past April. As of April, the most recent data available, GitHub reported that Sponsors had raised more than $33 million for maintainers.

In 2022, GitHub reported, nearly 40% of sponsorship funding through the program came from organizations, including Amazon Web Services, American Express, Mercedes Benz and Shopify. In 2023, it added a tool to help sponsors fund several open source projects at once.

The introduction of the bulk-support function and other upgrades has helped GitHub Sponsors see the number of organizations funding open source projects double over the past year, according to Peters. More than 3,500 organizations support maintainers through GitHub Sponsors, she wrote in an email to TNS.

“For far too long, developers have had to choose between their careers and open source passions — what they’re paid to do [versus] what they actually love,” Peters wrote. “Open source developers deserve to accelerate their careers at the rate they’re accelerating the world.”

LFX Crowdfunding is integrated with GitHub Sponsors, Carter told TNS in an email. She offered some guidance to help users get connected: “Community members can add and configure your sponsor button by editing a Funding.yml file in your repository’s .github folder, on the default branch.”
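
For anyone setting this up for the first time, the file GitHub looks for is conventionally named FUNDING.yml, and its contents are just a short list of funding platforms and account names. A minimal sketch, run from the repository root on the default branch, with placeholder account names and URL:

cat > .github/FUNDING.yml <<'EOF'
github: [your-github-username]
custom: ["https://example.com/donate"]
EOF

Once the file is committed to the default branch, a Sponsor button appears on the repository and points visitors to those channels.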

“Any mechanism that makes it easy for projects to find the support they need is important, and we’re excited to facilitate funding channels for existing and new initiatives,” she wrote.

Open Source as a Career Accelerator

GitHub, Peters noted, has identified an emerging trend: developers contributing to open source projects as a way to learn how to code and start careers. Two projects the company started in recent months are aimed at helping more of those early-career open source contributors gain support.

In November, GitHub launched GitHub Fund, a $10 million seed fund backed by Microsoft’s M12. The fund supported CodeSee, which maps repositories, and Novu, an open source notifications infrastructure.

“Since GitHub’s investment in CodeSee, the company has added generative AI into the platform, allowing developers to ask questions about a code base in natural language,” Peters wrote.

In April, GitHub started Accelerator, a 10-week program in which open source maintainers got a $20,000 sponsorship to work on their project; in addition, they received guidance and workshops. The project, Peters said, got 1,000 applications from maintainers in more than 20 countries; 32 participants made up the first cohort.

The participants included projects like Mockoon, a desktop API mocking application; Poly, a Go package for engineering organisms; and Strawberry GraphQL, a Python library for creating GraphQL APIs.

The direct investment, Peters wrote, was a “game changer” for Accelerator participants. “What we found there is very little existing support for open source maintainers who want to make it full time, and building a program that spoke directly to those folks had an oversized impact.”

And it’s helping to create a foundation for future funding, she added: “Based on the advice from experts, folks built a path to sustainability — whether that was bootstrapping, VC funding, grants, corporate sponsors or something else.”

Karpov offered an idea for companies that want to support their employees’ work on open source projects: providing engineers with an “open source budget” along with the learning budgets that have become a common perk.

“The developers that are typically using these [open source] projects most actively have zero budget,” he noted. “They can’t purchase anything — and frankly, frequently, they don’t even know who to ask about purchasing these sorts of things.”

An open source budget, for instance, could be spent on things like GitHub Sponsors. In return for sponsoring an open source maintainer, Karpov said, perhaps “you get a direct communication line with them, to be like, ‘Hey, can you answer this question?’ That could kind of make developers at these big companies much more productive.”

The post Open Source Needs Maintainers. But How Can They Get Paid? appeared first on The New Stack.

]]>
Demo: Reversing a Spring4Shell Attack with Prisma Cloud https://thenewstack.io/demo-reversing-a-spring4shell-attack-with-prisma-cloud/ Fri, 01 Sep 2023 20:10:26 +0000 https://thenewstack.io/?p=22717187

When a vulnerability attacks your Kubernetes cluster, visibility matters. You need to be able to see what’s going on in

The post Demo: Reversing a Spring4Shell Attack with Prisma Cloud appeared first on The New Stack.

]]>

When a vulnerability attacks your Kubernetes cluster, visibility matters. You need to be able to see what’s going on in order to mitigate harm.

In this episode of The New Stack Demos, David Maclean, of Prisma Cloud by Palo Alto Networks, shows Alex Williams, TNS founder and publisher, how Prisma Cloud can find and handle an attack on an application built with Spring, the popular Java framework.

Prisma Cloud offers users visibility and specific information about the Spring4Shell attack, said Maclean, a senior manager for solutions architects for the Middle East, Africa, Southern Europe and Latin America. Users, for instance, can learn “which package was it included within? Which layers of a Docker file does it reside within?”

The advantage of Prisma Cloud, which is inserted into the container runtime, he added, is its ability to act as a kind of “flight recorder” for the incident, “in which to go ahead and understand what actually led up to this event. And we’ve got full visibility of not only what led up to this particular reverse shell event, but actually also any other events that are ongoing all the time.”

Check out the video to see how this cloud native application protection platform (CNAPP) works.

The post Demo: Reversing a Spring4Shell Attack with Prisma Cloud appeared first on The New Stack.

]]>
Common Cloud Misconfigurations That Lead to Data Breaches https://thenewstack.io/common-cloud-misconfigurations-that-lead-to-data-breaches/ Fri, 01 Sep 2023 13:31:51 +0000 https://thenewstack.io/?p=22717145

The cloud has become the new battleground for adversary activity: CrowdStrike observed a 95% increase in cloud exploitation from 2021

The post Common Cloud Misconfigurations That Lead to Data Breaches appeared first on The New Stack.

]]>

The cloud has become the new battleground for adversary activity: CrowdStrike observed a 95% increase in cloud exploitation from 2021 to 2022, and a 288% jump in cases involving threat actors directly targeting the cloud. Defending your cloud environment requires understanding how threat actors operate — how they’re breaking in and moving laterally, which resources they target and how they evade detection.

Cloud misconfigurations — the gaps, errors or vulnerabilities that occur when security settings are poorly chosen or neglected entirely — provide adversaries with an easy path to infiltrate the cloud. Multicloud environments are complex, and it can be difficult to tell when excessive account permissions are granted, improper public access is configured or other mistakes are made. It can also be difficult to tell when an adversary takes advantage of them.

Misconfigured settings in the cloud clear the path for adversaries to move quickly.

A breach in the cloud can expose a massive volume of sensitive information, including personal data, financial records, intellectual property and trade secrets. The speed at which an adversary can move undetected through cloud environments to find and exfiltrate this data is a primary concern. Malicious actors speed up the search for valuable data in the cloud by using the native tools within the cloud environment, unlike in an on-premises environment, where they must deploy their own tools and therefore have a harder time avoiding detection. Proper cloud security is required to prevent breaches with far-ranging consequences.

So, what are the most common misconfigurations we see exploited by threat actors, and how are adversaries using them to get to your data?

  • Ineffective network controls: Gaps and blind spots in network access controls leave many doors open for adversaries to walk right through.
  • Unrestricted outbound access: When workloads have unrestricted outbound access to the internet, bad actors can take advantage of that, along with any gaps in workload protection, to exfiltrate data from your cloud platforms. Your cloud instances should be restricted to specific IP addresses and services to prevent adversaries from accessing and exfiltrating your data.
  • Improper public access configured: Exposing a storage bucket or a critical network service like SSH (Secure Shell), SMB (Server Message Block) or RDP (Remote Desktop Protocol) to the internet, or even a web service that was not intended to be public, can rapidly result in a cloud compromise of the server and exfiltration or deletion of sensitive data (see the example commands after this list).
  • Public snapshots and images: Accidentally making a volume snapshot or machine image (template) public is rare. When it does happen, it allows opportunistic adversaries to collect sensitive data from that public image. In some cases, that data may contain passwords, keys and certificates, or API credentials leading to a larger compromise of a cloud platform.
  • Open databases, caches and storage buckets: Developers occasionally make a database or object cache public without sufficient authentication/authorization controls, exposing the entirety of the database or cache to opportunistic adversaries for data theft, destruction or tampering.
  • Neglected cloud infrastructure: You would be amazed at just how many times a cloud platform gets spun up to support a short-term need, only to be left running at the end of the exercise and neglected once the team has moved on. Neglected cloud infrastructure is not maintained by the development or security operations teams, leaving bad actors free to gain access in search of sensitive data that may have been left behind.
  • Inadequate network segmentation: Modern cloud network concepts such as network security groups make old, cumbersome practices like ACLs (access control lists) a thing of the past. But insufficient security group management practices can create an environment where adversaries can freely move from host to host and service to service, based on an implicit architectural assumption that “inside the network is safe,” and that “frontend firewalls are all that is needed.” By not taking advantage of security group features to permit only host groups that need to communicate to do so, and to block unnecessary outbound traffic, cloud defenders miss out on the chance to block the majority of breaches involving cloud-based endpoints.
  • Monitoring and alerting gaps: Centralized visibility into the logs and alerts from all services makes it easier to search for anomalies.
  • Disabled logging: Effective logging of cloud security events is imperative for detecting malicious threat actor behavior. In many cases, however, logging is disabled by default on a cloud platform or gets disabled to reduce the overhead of maintaining logs. If logging is disabled, there is no record of events and therefore no means of detecting potentially malicious events or actions. Logging should be enabled and managed as a best practice (see the example commands after this list).
  • Missing alerts: Most cloud providers and all cloud security posture management providers offer alerts for important misconfigurations, and most detect anomalous or likely malicious activities. Unfortunately, defenders often don’t have these alerts on their radar, either due to too much low-relevance information (alert fatigue) or a simple lack of connection between those alert sources and the places they look for alerts, such as SIEM (security information and event management) tools.
  • Ineffective identity architecture: User accounts that are not rooted in a single identity provider that enforces limited session times and multifactor authentication (MFA), and that can flag or block sign-in for irregular or high-risk sign-in activity, are a core contributor to cloud data breaches because the risk of stolen credential use is so high.
  • Exposed access keys: Access keys are used to interact with the cloud-service plane as a security principal. Exposed keys can be rapidly misused by unauthorized parties to steal or delete data; threat actors may also demand a ransom in exchange for a promise to not sell or leak it. While these keys can be kept confidential, albeit with some difficulty, it is better to expire them or use automatically rotated short-lived access keys in combination with restrictions on where (from what networks and IP addresses) they can be used.
  • Excessive account permissions: Most accounts (roles, services) have a limited set of normal operations and a slightly larger set of occasional operations. When they are provisioned with far greater privileges than needed and these privileges are misused by a threat actor, the “blast radius” is unnecessarily large. Excessive permissions enable lateral movement, persistence and privilege escalation, which can lead to more severe impacts of data exfiltration, destruction and code tampering.
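
Two of the misconfigurations above, publicly exposed storage buckets and disabled logging, are also among the cheapest to close with the cloud provider’s own tooling. Here is a minimal AWS-flavored sketch; the bucket and trail names are placeholders, and equivalent controls exist on the other major clouds:

aws s3api put-public-access-block --bucket acme-customer-exports --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

aws cloudtrail start-logging --name acme-org-trail

aws cloudtrail get-trail-status --name acme-org-trail

The first command blocks every form of public access on a bucket; the other two re-enable event recording on an existing CloudTrail trail and confirm that it is logging.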

Just about everyone has a cloud presence at this point. A lot of organizations make the decision for cost savings and flexibility without considering the security challenges that go alongside this new infrastructure. Cloud security isn’t something that security teams will understand without requisite training. Maintaining best practices in cloud security posture management will help you avoid common misconfigurations that lead to a cloud security breach.

The post Common Cloud Misconfigurations That Lead to Data Breaches appeared first on The New Stack.

]]>
API Fuzzing: What Is It and Why Should You Use It? https://thenewstack.io/api-fuzzing-what-is-it-and-why-should-you-use-it/ Fri, 01 Sep 2023 13:00:30 +0000 https://thenewstack.io/?p=22716375

API fuzzing is a technique used to test the security and reliability of an application’s APIs. Fuzzing involves sending a

The post API Fuzzing: What Is It and Why Should You Use It? appeared first on The New Stack.

]]>

API fuzzing is a technique used to test the security and reliability of an application’s APIs. Fuzzing involves sending a large number of malformed or unexpected inputs to an API to uncover potential vulnerabilities, such as input validation issues, buffer overflows, injection attacks or other types of security flaws.

The main goal of API fuzzing is to identify vulnerabilities or weaknesses in the API implementation that an attacker could exploit. By injecting unexpected or malformed data, fuzzing can trigger unexpected behaviors or expose flaws in the API’s handling of input. This helps identify potential security vulnerabilities that an attacker could leverage to compromise the system.
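
To make the idea concrete, a single hand-rolled probe might look like the command below, which sends an absurd value to a hypothetical endpoint and prints only the HTTP status code; fuzzing tools simply automate thousands of variations of this:

curl -s -o /dev/null -w "%{http_code}\n" -X POST https://api.example.com/orders -H "Content-Type: application/json" -d '{"quantity": -999999999999999999999999}'

A well-behaved API should reject this with a controlled validation error; a 500 response, a hang or a crash is exactly the kind of signal fuzzing is designed to surface.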

The Benefits of API Fuzzing

  • Security assessment. API fuzzing helps identify security vulnerabilities and weaknesses in the API implementation. By exposing these issues early in the development process, you can take corrective measures to mitigate potential risks.
  • Error handling and resilience. Fuzzing can help assess how well an API handles unexpected or malformed input. By subjecting the API to various scenarios, you can identify error-handling weaknesses and ensure the system remains stable and resilient under stress or malicious inputs.
  • Compliance and standards. Many industries and regulatory frameworks require thorough security testing to ensure compliance. API fuzzing helps you meet these requirements by actively testing the API for potential vulnerabilities and weaknesses.
  • Third-party integration. APIs are often used to integrate third-party services or components into an application. API fuzzing allows you to assess the security posture of these integrations, ensuring that they don’t introduce vulnerabilities into your system.
  • Cost-effectiveness. API fuzzing can help you identify vulnerabilities and weaknesses in a cost-effective manner. Compared to manual security testing, fuzzing can automatically generate a large number of test cases and quickly pinpoint potential issues.

Schemathesis: A Tool for API Fuzzing

Schemathesis is a specification-based testing tool designed explicitly for OpenAPI and GraphQL applications. It utilizes the robust Hypothesis framework to generate test cases based on the provided API specification.

By leveraging the OpenAPI or GraphQL schema, Schemathesis automatically generates a wide range of test scenarios, covering different combinations of inputs, edge cases and potential vulnerabilities. This approach ensures thorough and systematic testing of the API implementation.

Schemathesis focuses on testing whether an API conforms to its specification. It verifies that the API responses match the expected schema, ensuring compliance with the defined contract. This type of testing helps uncover issues related to input validation, response structures, error handling and more.

By combining the benefits of the Hypothesis framework, which provides intelligent and property-based testing, with the OpenAPI or GraphQL specification, Schemathesis simplifies the process of testing and validating APIs. It streamlines the testing workflow, increases test coverage, and assists in identifying and resolving potential issues in the API implementation.

Key Features and Benefits of the Schemathesis Tool

In addition to its support for OpenAPI and GraphQL, which allows for testing a wide range of APIs, Schemathesis offers the following advantages:

  • Positive and negative tests. With Schemathesis, you can create test cases that cover both valid and invalid inputs. This helps ensure your API handles unexpected or incorrect data gracefully.
  • Stateful testing. Schemathesis enables the generation of sequences of API requests that build on each other. This allows you to test complex scenarios, where subsequent requests depend on the results of previous ones.
  • Session replay. The tool provides a feature to store and replay test sessions. This makes it easier to investigate and debug issues by reproducing the exact sequence of API requests that led to a problem.
  • Targeted testing. You can guide the data-generation process toward specific metrics, such as response time or size. This helps uncover performance or resource usage issues and optimize your API’s behavior under different conditions.
  • Python integration. The tool seamlessly integrates with Python-built applications through native Asynchronous Server Gateway Interface (ASGI) and Web Server Gateway Interface (WSGI) support. This ensures faster testing of your Python-based APIs.
  • Customization. Schemathesis offers customization options, allowing you to fine-tune data generation, API response verification and the overall testing process to fit your specific needs.
  • Continuous integration. Docker image and GitHub Action integration are supported, enabling you to run tests on every code change as part of your CI pipeline.
  • Software as a Service platform. Schemathesis offers an all-in-one SaaS platform, which eliminates the need for setup or installation. This can be beneficial if you prefer a hosted solution for your testing needs.
  • Commercial support. The open source tool is also available as a commercial and enterprise-level offering. The maker’s commercial support includes professional guidance to help you maintain an optimal testing workflow and address any issues or challenges you may encounter.

Getting Started with Schemathesis

By using advanced techniques like swarm testing and schema fuzzing, Schemathesis produces diverse and high-quality test data, enabling comprehensive testing and thorough bug detection.

The tool helps you prevent crashes, database corruption, and hangs by discovering API-breaking payloads. It also helps keep your API documentation up to date by validating examples from the OpenAPI or GraphQL schemas.

When issues occur, Schemathesis provides detailed failure reports and a single cURL command to reproduce the problem instantly, simplifying the debugging process. By thoroughly testing your API with Schemathesis, you can have increased confidence in its stability and reliability.

Schemathesis covers a wide range of test scenarios, providing comprehensive testing coverage and uncovering potential vulnerabilities. Moreover, it saves time by automating the generation of test scenarios based on the schema.

You can use Schemathesis as a CLI, Python library, GitHub app or SaaS platform, making it accessible and adaptable to your testing needs.

Installation

To install Schemathesis, you can use either Python package installation or Docker image:

Python Package Installation

Run the following command in your terminal to install Schemathesis as a Python package:

python -m pip install schemathesis


This command will install the necessary dependencies and make the st entry point available for use.

Docker Image

Alternatively, if you prefer to use the Docker image, you can pull it by executing the following command in your terminal:

docker pull schemathesis/schemathesis:stable


This command will download the Docker image of Schemathesis, allowing you to use it without installing it as a Python package.

Both installation methods provide access to the Schemathesis testing tool, and you can choose the one that suits your preferences and environment.

If you prioritize simplicity and ease of use, the Schemathesis CLI would be a suitable choice. The CLI provides a quick and straightforward way to get started with testing your API based on the schema. It offers a command-line interface where you can specify the necessary parameters, such as the API schema URL, and run the tests.

The Schemathesis CLI generates extensive tests based on the schema and reports any failures, along with reproduction instructions. This allows you to quickly identify and address issues in your API implementation.

On the other hand, if you prefer more control and customization within your codebase, the Schemathesis Python package would be a better fit. By integrating the Schemathesis library into your Python project, you have the flexibility to configure and fine-tune the testing process according to your specific needs. You can programmatically define test suites, set up custom hooks and further extend the functionality as required.

Regardless of whether you choose the CLI or the Python package, both options offer comprehensive testing capabilities and provide detailed reports of failures with reproduction instructions. You can select the option that aligns best with your workflow and preferences.

Additionally, Schemathesis offers a native GitHub app for convenient integration with pull requests, providing test result reports directly in your repositories.

How to Use Schemathesis with Different Configurations

Here are some examples of how to use the tool with different configs:

Running unit tests for all API operations with the not_a_server_error check:

export SCHEMA_URL="http://127.0.0.1:5000/api/openapi.json"

export PYTHONPATH=$(pwd)

st run $SCHEMA_URL


Selecting specific operations (POST) that have booking in their path:

st run -E booking -M POST $SCHEMA_URL


Running specific checks, such as status code conformance:

st run -c status_code_conformance $SCHEMA_URL


Including custom checks registered in the test/hooks.py module:

SCHEMATHESIS_HOOKS=test.hooks st run $SCHEMA_URL


Providing custom headers, such as an authorization token:

st run -H "Authorization: Bearer <token>" $SCHEMA_URL


Configuring Hypothesis parameters, such as running up to 1,000 examples per tested operation:

st run --hypothesis-max-examples 1000 $SCHEMA_URL


Running tests in multiple threads (eight threads, in this example):

st run -w 8 $SCHEMA_URL


Storing network log to a file for later replay:

st run --cassette-path=cassette.yaml $SCHEMA_URL


To replay requests from the stored log:

st replay cassette.yaml


Running integration tests:

st run $SCHEMA_URL


Make sure to set the SCHEMA_URL variable to the appropriate URL of your API’s schema file. You can also adjust other options and flags to customize your testing process.

Running Schemathesis from the Command Line

Using Python Package Installation

st run --checks all https://example.schemathesis.io/openapi.json


Using Docker Image

docker run schemathesis/schemathesis:stable \

run --checks all https://example.schemathesis.io/openapi.json


In both cases, the run command is used to initiate the test execution. The --checks all flag specifies that all available checks should be performed during the testing process. Replace https://example.schemathesis.io/openapi.json with the URL of your OpenAPI schema.

Choose either the Python package installation or the Docker image, based on your preferred method of installation and execution.

The post API Fuzzing: What Is It and Why Should You Use It? appeared first on The New Stack.

]]>
More Lessons from Hackers: How IT Can Do Better https://thenewstack.io/more-lessons-from-hackers-how-it-can-do-better/ Fri, 01 Sep 2023 10:00:34 +0000 https://thenewstack.io/?p=22716936

Kelly Shortridge is an advocate for better resiliency in IT systems. The author of Security Chaos Engineering: Sustaining Resilience in

The post More Lessons from Hackers: How IT Can Do Better appeared first on The New Stack.

]]>

Kelly Shortridge is an advocate for better resiliency in IT systems. The author of “Security Chaos Engineering: Sustaining Resilience in Software and Systems” and a senior principal engineer in the office of the CTO at Fastly, she spoke at this year’s Black Hat conference. She explained why attackers are more resilient and what IT organizations can do to become more resilient and responsive.

Recently, The New Stack looked at Shortridge’s recommendations to leverage Infrastructure as Code and the continuous integration/continuous delivery pipeline to become more resilient. In this follow-up post, we’ll look at the final lessons IT can take from attackers to improve their security posture:

  • Design-based defense
  • Systems thinking
  • Measuring tangible and actionable success

Design-Based Defense: Modularity and Isolation

“The solutions that actually help with this aren’t the ones we usually consider in cybersecurity or at least traditional cybersecurity. We want to design solutions that encourage the nimbleness that we envy in attackers, we want to design solutions that help us become the best ever-evolving defenders,” she said. “The less dependent it is on human behavior, the better it is.”

From Kelly Shortridge’s Black Hat 2023 presentation

She created the ice cream cone hierarchy of security solutions to demonstrate how organizations should prioritize security and resilience mitigations. As an example of a design-based solution, she pointed to Kelly Long’s push to use HTTPS as the default for Tumblr’s user blogs.

“That’s a fantastic example of a design-based solution,” Shortridge said. “She knew that security should be invisible to the end users, so we shouldn’t put the burden of security on end users who aren’t technical. I think she’s really ahead of her time.”

Instead of offloading that work onto end users and peers, IT should try to automate security and use design-based defense when possible. That means deliberately designing in modularity. Modularity allows structurally or functionally distinct parts to retain autonomy during periods of stress and allows for easier recovery from loss, Shortridge explained. A queue, for instance, adds a buffer, and message brokers can replay messages and make calls non-blocking.

“Message brokers and queues provide a standardization for passing data around the system. It also provides a centralized view into it,” she said. “What you get here is visibility, you can see where data is flowing in your system.”

Modularity also supports an airlock approach, so that if an attack gets through, it won’t necessarily bring your whole system down. She demonstrated an air gap between two services talking to each other with a queue in between. The queue allows you to take the affected service offline and fix it while service A continues to send requests, which the queue handles, allowing the system to stay available and functioning until the fix is put into place.

“Modularity, when done right, minimizes incident impact because it keeps things separate,” she said. “Modularity allows us to break things down into smaller components, and that makes it much harder for attackers to persist, especially if it’s ephemeral, and harder for them to move laterally and gain widespread access in our system.”

Mozilla and UC San Diego have used this approach and have reported they no longer have to worry about zero day attacks because these sandboxes of components give them time to roll out a reliable fix without taking the system down, she added.

Systems Thinking

Repeatedly, the speaker at Black Hat said attackers are “system thinkers.” Shortridge reiterated this in her talk.

“Attackers think in systems, while defenders think in components, [which is] especially apparent when I talk to security teams, and thinking about how traffic and data flows between surfaces is often overlooked,” Shortridge said. “We’re so focused as an industry on ingress and egress that we miss how services talk to each other. And by the way, attackers love that we missed this.”

Attackers tend to focus on one thing: your assumptions. You assume parsing the string will always be fast, or that the message that shows up on this port will always be post-authentication, or that an alert will always fire when the malicious executable appears. But will it really? Attackers will test your assumptions and then keep looking to see if you’re just a little wrong or really wrong, she said.

“We want to be fast, ever-evolving defenders, we want to refine our mental models continuously rather than waiting for attackers to exploit the difference between our mental models and reality,” she said. “Decision trees and resilient stress testing can help us do just that.”

Decision trees can help find the gaps in your security mitigations, she said, and force IT to examine the “this will always be true” assumptions before attackers do. Resilience stress tests — called chaos engineering in security circles — build upon decision trees, helping to identify where systems can fail.

“Chaos engineering seeks to understand how disruptions impact the entire system’s ability to recover and adapt,” she said. “It appreciates the inherent interactivity in the system across time and space. So it means we’re stress testing at the system level, not the component level as you usually do. It forces you to adopt a systems mindset.”

Measuring Tangible and Actionable Success

Attackers have another advantage — they can measure success and receive immediate feedback on their metrics. Attacker metrics are straightforward: Do they have access? How much access do they have? Can they accomplish their goal? Security vendors, by contrast, often struggle to create lucid, actionable metrics — especially metrics that offer immediate feedback, she said.

“We want to be fast, ever-evolving defenders, we need system signals that can inspire quick action, we need system signals that can inform change,” she said. “It turns out reliability signals are friends here, they’re really useful for security.”

IT security should learn and use the organization’s observability stack, she advised. They can even help detect the presence of attackers, she added.

“Again, attackers monitor the system they’re compromising to make sure they’re not tipping off defenders, or tripping over any sort of alert thresholds. So in the resilience revolution, we want to collect system signals, too, so we can be fast and ever-evolving right back,” she said.

The post More Lessons from Hackers: How IT Can Do Better appeared first on The New Stack.

]]>
How to Give Developers Cloud Security Tools They’ll Love https://thenewstack.io/how-to-give-developers-cloud-security-tools-theyll-love/ Thu, 31 Aug 2023 15:10:43 +0000 https://thenewstack.io/?p=22717100

There are few better ways to make developers resent cybersecurity than to impose security tools on them that get in

The post How to Give Developers Cloud Security Tools They’ll Love appeared first on The New Stack.

]]>

There are few better ways to make developers resent cybersecurity than to impose security tools on them that get in the way of development operations.

After all, although many developers recognize the importance of securing applications and the environments that host them, their main priority as software engineers is to build software, not to secure it. If you burden them with security tools that hamper their ability to write code efficiently, you’re likely to get resistance against the solutions — and rampant security risks because your developers may not take the tools seriously or use them to maximum effect.

Fortunately, that doesn’t have to be the case. There are ways to square the need for rigorous security tools with developers’ desire for efficiency and flexibility in their own work. Here are some tips to help you choose the right security tools and features to ensure that security solutions effectively mitigate risks without burdening developers.

What to Look for in Modern Cloud Security Tools

There are many types of security tools out there, each designed to protect a specific type of environment, a certain stage of the software delivery life cycle or against a certain type of risk. You might use “shift left” security tools to detect security risks early in the software delivery pipeline, for example, while relying on cloud security posture management (CSPM) and cloud identity and entitlement management (CIEM) solutions to detect and manage risks within the cloud environments that host applications.

You could leverage all of these features via an integrated cloud native application protection platform (CNAPP) solution, or you could implement them individually, using separate tools for each one.

However, regardless of the type of security tools you need to deploy or types of risks you’re trying to manage, your solutions should provide a few key benefits to ensure they don’t get in the way of developer productivity.

Context-Aware Security

Context-aware security is the use of contextual information to assess whether a risk exists in the first place, and if so, the potential severity of that risk. It’s different from a more-generic, blunter approach to security wherein all potential risks are treated the same, regardless of context.

The key benefit of context-aware security for developers is that it’s a way of balancing security requirements with usability and productivity. Based on the context of each situation, your security tools can evaluate how rigorously to deploy protections that may slow down development operations.

For example, imagine that you’ve configured multifactor authentication (MFA) by default for the source code management (SCM) system that your developers use. In general, requiring MFA to access source code is a best practice from a security perspective because it reduces the risk of unauthorized users being able to inject malicious code or dependencies into your repositories. However, having to enter multiple login factors every time developers want to push code to the SCM or view its status can slow down operations.

To provide a healthy balance between risk and productivity in this case, you could deploy a context-aware security platform that requires MFA by default when accessing the SCM but only requires one login factor when a developer connects from the same IP address and during the same time window from which he or she has previously connected. Based on contextual information, lighter security protections can be deployed in some circumstances so that developers can work faster.

Security Integrations

The more security tools you require developers to integrate with their own tooling, the harder their lives will be. Not only will the initial setup take a long time, but they’ll also be stuck having to update integrations every time they update their own tools.

To mitigate this challenge, look for security platforms that offer a wide selection of out-of-the-box integrations. Native integrations mean that developers can connect security tooling to their own tools quickly and easily, and that updates can happen automatically. It’s another way to ensure that development operations are secure, but without hampering developer efficiency or experience.

Comprehensive Protection

The more security features and protections you can deploy through a single platform, the fewer security tools and processes your developers will have to contend with to secure their own tools and resources.

This is the main reason why choosing a consolidated, all-in-one cloud security platform leads to a better developer experience. It not only simplifies tool deployment, but also gives developers a one-stop solution for reporting, managing and remediating risks. Instead of toggling through different tools to manage different types of security challenges, they can do it all from a single location, and then get back to their main job — development.

Getting Developers on Board with Security

At their worst, security tools are the bane of developers’ existence. They get in the way and slow developers down, and developers treat them as a burden they have to bear.

Well-designed, well-implemented security tools do the opposite. By using strategies such as context-aware security, broad integrations and comprehensive, all-in-one cloud security platforms, organizations can deploy the protections they need to keep IT resources secure while simultaneously keeping developers happy and productive.

Interested in strengthening your cloud security posture? The Orca Cloud Security Platform offers complete visibility and prioritized alerts for potential threats across your entire cloud estate. Sign up for a free cloud risk assessment or request a demo today to learn more.

The post How to Give Developers Cloud Security Tools They’ll Love appeared first on The New Stack.

]]>