Market opportunities in MENA for streaming, advertising, and sports
With the region's subscription video on demand (SVOD) market expected to surpass $1.2 billion by the end of this year, it's no wonder that entertainment and media companies are looking at the Middle East and North Africa (MENA) with increased focus. While the region presents unique opportunities for expansion and greater content monetization, reaching this diverse and often fragmented audience poses distinct challenges.
At our 2024 MENA Monetization Summit in Dubai, industry leaders discussed the innovative strategies they’ve used to thrive in this dynamic market. Read on to learn the drivers behind their success and gain strategic insights for effective content monetization in this rapidly evolving region.
MENA’s streaming and advertising market: highlights to know
Opportunities in MENA are evolving rapidly, driven by a young, tech-savvy population and increasing digital penetration. Consider the following statistics from global analysts Omdia:
MENA’s SVOD services market generated over $1 billion in revenues in 2023 and is expected to surpass $1.2 billion in 2024.
Online video advertising revenue in MENA is expected to grow by 67% by 2028, while online video subscription revenue is expected to grow by 19%.
Already, the Free Ad-Supported TV (FAST) market in MENA has topped $7 million — with the potential to quadruple in the next five years.
Saudi Arabia sees the highest consumption of YouTube videos globally.
So, how can businesses capitalize on these burgeoning markets? There are a few key considerations when evaluating opportunities for content monetization in MENA:
First, underserved sports content is a great avenue to explore. The popularity of sports such as cricket, rugby, mixed martial arts, and fighting sports in the region opens up significant opportunities to stake out new territory. Since these sports are not as heavily contested by major players, new market entrants can quickly and effectively carve out a niche.
When expanding into MENA, localized content will be particularly important for success. Content tailored to local tastes, cultural norms, and preferences is crucial. This means finding opportunities to produce and broadcast local sports, creating region-specific reality shows, and emphasizing local celebrities and events. Platforms that offer a mix of local productions and international content stand a better chance of engaging the audience.
Similar to findings from other parts of the world, FAST channels are becoming increasingly popular in MENA, providing an alternative to traditional pay-TV. These channels attract large audiences by offering free content supported by advertising. This CTS webinar dives deeper into FAST channel technology.
FAST channel revenues in MENA reached $7.2 million in 2023 and are projected to quadruple in the next 5 years.
Success stories from MENA
Several companies are successfully navigating the MENA market's challenges by leveraging specific strategies and focusing on underserved segments. STARZ PLAY Arabia and Shahid together make up approximately 40% of the total over-the-top (OTT) services market in the region, according to Q4 2023 Omdia data.
Shahid and STARZ PLAY lead the MENA streaming video market.
STARZ PLAY: With over 3.5 million subscribers and growing, this SVOD service has seen tremendous success by focusing on underserved sports and licensed Hollywood content. Here's the strategy at a glance:
Securing sports rights, including UFC, the Cricket World Cup, and ICC tournaments
Enhancing the user experience with sport-specific UI features
Shahid, part of the MBC Group, has established itself as a leading platform in the MENA region by:
Leveraging its extensive library of premium Arabic content
Growing both its ad-supported video on demand (AVOD) and SVOD services in tandem but with a heavier emphasis on advertising
Key trends to watch across the region
Shahid, STARZ PLAY, AWS, FreeWheel, and other companies that have seen considerable success in MENA have tapped into key trends in the region.
Shifting toward hybrid models: In the Middle East, pay-TV is still important, but online advertising is growing rapidly. Giant entertainment companies such as Netflix and Amazon Prime are exploring ad-supported content. The expected growth in subscriptions, combined with the increasing importance of advertising revenue in the region, highlights this trend.
Importance of data and personalization: AI is revolutionizing content monetization in MENA by enhancing personalization and operational efficiency. AI is helping content providers gain a deeper understanding of user behavior and preferences, allowing for highly targeted and contextual advertising. By using AI to analyze vast amounts of data, companies can predict churn behaviors, personalize content recommendations, and optimize advertising strategies. Employing AI in content production processes, such as automated subtitling in multiple languages, is becoming a cost-effective way to make content more accessible and widen reach.
Rise of sports: The growing popularity of esports and niche sports presents a lucrative opportunity. The addition of new sports in the Olympic program could lead to increased engagement and monetization.
Looking to grow in MENA? Here’s your roadmap.
To thrive in the competitive MENA market, industry leaders recommend adopting the following strategies:
Identify and focus on areas underserved by major players, such as specific sports or localized content. This could mean using unique entertainment formats, such as live sports events, to attract audiences, foster brand loyalty, and maximize monetization potential.
Leverage partnerships and form strategic alliances with local telecom operators and device manufacturers, and explore managed channel origination to boost visibility and distribution opportunities.
Invest in robust technology and infrastructure to handle high concurrent user traffic. In particular, cloud services and scalable solutions are essential for maintaining a seamless user experience and meeting the expectations of the region's audiences.
Enhance fan engagement by leveraging AI and data analytics to create personalized and interactive experiences. This can power real-time engagement during live events, fantasy sports integration, key moments and highlights, and tailored content recommendations.
Diversify revenue streams by combining subscription services with advertising, and explore opportunities in branded content and sponsorships to maximize revenue potential. The future of content monetization lies in hybrid models that pair subscriptions with advertising; the expected growth in the number of subscriptions and the increasing importance of advertising revenue underscore this shift.
There is immense potential for success in MENA provided that companies can navigate its complexities and leverage its unique opportunities. Understanding local market dynamics and tailoring strategies accordingly will be key to capitalizing on the opportunities at hand.
Looking for a trusted partner to help support your strategies? Contact us today.
Bee sharp: putting GenAI to work for asset insights with BeeKeeper AI™
Artificial intelligence (AI) and music are a lot alike. When you have the right components together, like patterns in melodies and rhythms, music can be personal and inspire creativity. In my experience working on projects that developed AI for IT and security teams, data can similarly reveal patterns in day-to-day activities and frustrations that can then be enhanced or automated.
I started working in AI technology development nearly a decade ago. I loved the overlaps between music and programming. Both begin with basic rules and theory, but it is the human element that brings AI (and music) to life.
Recently, we launched BeeKeeper AI™ from DataBee, a generative AI (genAI) tool that uses patent-pending entity resolution technology to find and validate asset and device ownership. It was inspired by our own internal cybersecurity and operations teams' struggles with chasing down ownership, which sometimes added up to 20+ asset owner reassignments; we knew there was a better way forward. Through integrations with enterprise chat clients like Teams, BeeKeeper AI uses your data to speak to your end users, replacing the otherwise arduous manual process of confirming or redirecting asset ownership.
What’s the buzz about BeeKeeper AI from DataBee?
Much like how a good song metaphorically speaks to the soul, BeeKeeper AI’s innovative genAI approach is tuned to leverage ownership confidence scores that prompt it to proactively reach out to end users. Now, IT admins and operations teams don’t have to spend hours each day reaching out to asset owners who often become frustrated over having their day interrupted. Further, by using BeeKeeper AI for ‘filling in the blanks’ of unclaimed or newly discovered assets, you have an improved dataset of who to reach out to when security vulnerabilities and compliance gaps appear.
BeeKeeper AI, a part of DataBee for Security Hygiene and Security Threats, uses entity resolution technology to identify potential owners for unclaimed assets and devices based on factors such as authentication log comparisons.
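To make the idea concrete, here is a minimal, hypothetical sketch of how candidate owners might be ranked from authentication activity. The field names and the simple frequency-based confidence score are illustrative assumptions for this example, not BeeKeeper AI's actual implementation.

```python
from collections import Counter

# Hypothetical authentication log entries for an unclaimed device.
# Field names are illustrative, not DataBee's actual schema.
auth_events = [
    {"device": "LAPTOP-4821", "user": "jsmith", "outcome": "success"},
    {"device": "LAPTOP-4821", "user": "jsmith", "outcome": "success"},
    {"device": "LAPTOP-4821", "user": "svc_backup", "outcome": "success"},
    {"device": "LAPTOP-4821", "user": "jsmith", "outcome": "failure"},
]

def rank_candidate_owners(events, device_id):
    """Rank likely owners of a device by their share of successful logins."""
    logins = Counter(
        e["user"]
        for e in events
        if e["device"] == device_id and e["outcome"] == "success"
    )
    total = sum(logins.values()) or 1
    # Confidence here is simply the user's share of successful logins.
    return [(user, count / total) for user, count in logins.most_common()]

print(rank_candidate_owners(auth_events, "LAPTOP-4821"))
# [('jsmith', 0.67), ('svc_backup', 0.33)] (approximately)
```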
BeeKeeper AI is developed with a large language model (LLM) that features strict guardrails to keep conversations on track and hallucinations at bay when engaging these potential owners. This means that potential asset owners can simply respond “yes” or suggest someone else and move on with their day.
Once users respond, BeeKeeper AI can do the rest – including looking for other potential owners, updating the DataBee platform, and even updating the CMDB, sharing its learnings with other tools.
Automatic updates to improve efficiency and collaboration
Most IT admins and operations teams heave a sigh every time they have to manually update their asset inventories. If you’ve been using spreadsheets to maintain a running, cross-referenced list of unclaimed devices and potential owners, then you’re singing the song of nearly every IT department globally.
This is where BeeKeeper AI harmonizes with the rest of your objectives. When BeeKeeper AI automatically updates the DataBee platform, everyone across the different teams has a shared source of data, including:
IT
Operations
Information security
Compliance
Unknown or orphaned assets are everyone’s responsibility as they can become a potential entry point for security incidents or create compliance gaps. BeeKeeper AI can even give you insights from its own activity, allowing you to run user engagement reports to quantify issues like:
Uncooperative users
Total users contacted and their responses
Processed assets, like validated and denied assets
Since it automatically updates the DataBee platform, BeeKeeper AI makes collaboration across these different teams easier by ensuring that they all have the same access to cleaner and more complete user and asset information that has business context woven in.
Responsible AI for security data
AI is a hot topic, but not all AI is the same. At DataBee, we believe in responsible AI with proper guardrails around the technology’s use and output.
As security professionals, we understand that security data can contain sensitive information about your people and your infrastructure. BeeKeeper AI starts from your clean, optimized DataBee dataset and works within your contained environment. Because they are unique to each organization's data, BeeKeeper AI's guardrails keep sensitive data from leaking.
This is why BeeKeeper AI sticks to what it knows, even when someone tries to take it off task. Our chatbot isn't easily distracted and redirects off-topic attempts back to its sole purpose: identifying the right asset owners.
Making honey out of your data with BeeKeeper AI
BeeKeeper AI leverages your security data to proactively reach out to users and verify whether they own assets. With DataBee, you can turn your security data into analytics-ready datasets to get insights faster. Let BeeKeeper AI manage your hive so you can focus on making honey out of your data.
If you’re ready to reduce manual, time-consuming, collaboration-inhibiting processes, request a custom demo to see how DataBee for Security Hygiene can help you sing a sweeter tune.
DataBee: Who do you think you are?
2024 has been a big “events” year for DataBee as we’ve strived to raise awareness of the new business and the DataBee Hive™ security, risk and compliance data fabric platform. We’ve participated in events across North America and EMEA including Black Hat USA, the Gartner Security & Risk Management Summits, FS-ISAC Summit, Snowflake Data Cloud Summit and AWS re:Inforce, and of course, the RSA Conference. At RSA, we introduced to the community our sweet (haha) and funny life-size bee mascot, who ended up being a big hit among humans and canines alike.
Participation in these events has been illuminating on many important fronts. For the DataBee “hive” it’s been invaluable, not only for the conversations and insights we gain from real users across the industry, but also for the feedback we receive as we share the story of DataBee’s creation and how it was inspired by the security data fabric that Comcast’s Global CISO, Noopur Davis, and her team developed. In general, we’ve been thrilled with the response that DataBee has received, but consistently, there’s one piece of attendee feedback that really gives us pause:
“Why would Comcast Technology Solutions enter the cybersecurity solutions space?”
In other words, “what the heck is Comcast doing here?”
This question makes it pretty clear: Comcast might be synonymous with broadband, video, media and entertainment services and experiences, but it may be less associated with cybersecurity.
But it should be. While Comcast and Xfinity may not be immediately associated with cybersecurity, Comcast Business, a $10 billion business within Comcast, has been delivering advanced cybersecurity solutions to businesses of all sizes since 2019. With our friends at Comcast Business, the DataBee team is working hard to change perceptions and increase awareness of Comcast’s rich history of innovation in cybersecurity.
Let's take a quick look at some of the reasons why the Comcast name should be synonymous with cybersecurity.
Comcast Business
Comcast Business is committed to helping organizations adopt a cybersecurity posture that meets the diverse and complex needs of today's cybersecurity environment. Comcast Business' comprehensive solutions portfolio is specifically engineered to tackle the multifaceted challenges of the modern digital landscape. With advanced capabilities spanning real-time threat detection and response, Comcast Business solutions help protect businesses. Whether through Unified Threat Management systems that simplify security operations, cloud-based solutions that provide flexible defenses, or DDoS mitigation services that help preserve operational continuity, Comcast Business is a trusted partner in cybersecurity. Comcast Business provides the depth, effectiveness, and expertise necessary to enhance enterprise security posture through:
SD-WAN with Advanced Security
Connect users to applications securely both onsite and in the cloud
Unified Threat Management (UTM)
UTM solutions provide an integrated security platform that combines firewall, antivirus, intrusion prevention, and web filtering to simplify management and enhance visibility across the network.
DDoS Mitigation
Protection from disruption caused by Distributed Denial of Service attacks, helping to identify and block anomalous spikes in traffic while preserving the desired functionality of your services.
Secure Access Service Edge (SASE)
Integrating networking and security into a unified cloud-delivered service model, our SASE framework supports dynamic secure access needs of organizations, facilitating secure and efficient connectivity for remote and mobile workers.
Endpoint Detection and Response (EDR)
Helps safeguard devices connected to your enterprise network, using AI to detect, investigate, remove, and remediate malware, phishing, and ransomware.
Managed Detection and Response (MDR)
Extends EDR capabilities to the entire network and detects advanced threats, backed by 24/7 monitoring from a team of cybersecurity experts.
Vulnerability Scanning and Management
Helps identify and manage security weaknesses in the network and software systems, a proactive approach that helps close off potential entry points for threat actors.
Comcast Ventures
Did you know that Comcast has a venture capital group that backs early-to-growth stage startups that are transforming sectors like cybersecurity, AI, healthcare, and more?
Some of the innovative cybersecurity, data and AI-specific companies that Comcast Ventures has invested in include:
BigID
SafeBase
HYPR
Resemble AI
Bitsight
Uptycs
Recently, cybersecurity investment and advisory firm NightDragon announced a strategic partnership with Comcast Technology Solutions (CTS) and DataBee that also included Comcast Ventures. As a result of this strategic partnership, CTS, Comcast Ventures and DataBee will gain valuable exposure to the new innovations coming from NightDragon companies.
Comcast Cybersecurity
As I write this, Comcast Corporation is ranked 33 on the Fortune 500 list, so – as you might guess – it has an expansive internal cybersecurity organization. With $121 billion+ in annual revenues, over 180,000 employees around the globe, and a huge ecosystem of consumers and business customers and partners, Comcast takes its security obligations very seriously.
Our cyber professionals collectively hold multiple patents and are awarded more each year. We lead standards bodies, and we participate and provide leadership in multiple policy forums. Our colleagues contribute to open-source communities where we share our security innovations. We are an integral part of the global community of cybersecurity practitioners: we present at conferences, learn from our peers, hold multiple certifications, and publish in various journals. We are a contributing member of the Communications ISAC and the CISA Joint Cyber Defense Collaborative. A sampling of internal research and development efforts within Comcast's cybersecurity organization includes:
One-time secure secrets sharing
Security data fabric (Note: the inspiration for DataBee®)
Anomaly detection
AI-based secrets detection in code
AI-based static code analysis for privacy
Crypto-agility risk assessment
Machine-assisted security threat modeling
Scoping of threats against AI/ML apps
Persona-based privacy threat modeling
PKI and token management systems
Certificate lifecycle management and contribution to industry IoT stock
R&D for BluVector Network Detection and Response (NDR) product
The Comcast Cyber Security (CCS) Research team "conducts original applied and fundamental cybersecurity research." Selected projects that the team is working on include research on security and human behavior, security by design, and emerging technologies such as post-quantum cryptography. CCS works with technology teams across Comcast to identify and explore security gaps in the broader cyber ecosystem.
The Comcast Cybersecurity team's work developing and implementing a security data fabric platform was the inspiration for what has become DataBee. Although the DataBee team has architected and built its commercial DataBee Hive™ security, risk and compliance data fabric platform from "scratch" (so to speak), it was Comcast's internal platform, and the great results it has delivered and continues to deliver, that proved such a solution could be a game-changer, especially for large, complex organizations. While DataBee Hive has been designed to address the needs and scale of any type of enterprise or IT architecture, we were fortunate to be able to tap into the learnings from the years and countless person-hours of development that went into building Comcast's internal security data fabric platform and then operating it at scale.
DataBee Cybersecurity Suite
Besides the DataBee Hive security data fabric platform and products, the DataBee business unit of Comcast Technology Solutions is also home to BluVector, an on-premises network detection and response (NDR) platform. Comcast acquired BluVector, which was purpose-built to protect critical government and enterprise networks, in 2019. BluVector continues to deliver AI-powered NDR for visibility across network, devices, users, files, and data to discover and hunt skilled and motivated threats.
Comcast and cybersecurity? Of course.
So, the next time you come across DataBee, from Comcast Technology Solutions, and you think to yourself “why is Comcast in the enterprise security market with DataBee?!” – think again.
From small and mid-size organizations to large enterprises and government agencies, from managed services to products and solutions, and from on-premises to cloud-native, Comcast's complete cybersecurity "portfolio" runs the gamut.
Want to connect with someone to determine what’s right for your organization? Contact us, and in “Comments”, let us know if you’d like to evaluate solutions from both DataBee and Comcast Business. We’ll look forward to exploring options with you!
Compliance Takes a Village: Celebrating National Compliance Officer Day
If the proverb is "it takes a village to raise a child," then the corollary in the business world is that it takes a village to get compliance right. And in this analogy, compliance officers are the mayors of the village. Compliance officers schedule audits, coordinate activities, oversee processes, and manage documentation. They are the often-unsung heroes whose work acts as the foundation of your customers' trust, helping you achieve certifications and mitigate risk.
While your red teamers and defenders get visibility because they sit at the frontlines, your compliance team members are strategizing and carving paths to reduce risk and enable programs. For this National Compliance Officer Day, we salute these mayors of the compliance village in their own words.
Feeling Gratitude
There is a great amount of pride when compliance officers are able to help you build trust with your customers, but there is also an immense amount of gratitude from compliance teams for the internal relationships built within the enterprise.
Yasmine Abdillahi, Executive Director of Security Risk and Compliance and Business Information Security Officer at Comcast, expressed gratitude for executive leader Sudhanshu Kairab, whose ability to grasp core business fundamentals has allowed Comcast to implement robust compliance frameworks that mitigate risks and support growth and trust.
“[Sudhanshu] consistently demonstrates a keen awareness of industry trends, enabling us to stay ahead of emerging challenges and opportunities. His ability to sustain and nurture a strong network, both internally and externally, has proven invaluable in fostering collaboration and ensuring we remain at the forefront of GRC best practices. His multifaceted approach to leadership has not only strengthened our risk posture but has also positioned our GRC function as a key driver of innovation and business growth.”
Compliance professionals rely on their strategic internal business partners to succeed. When enterprise leaders empower the GRC function, compliance and risk managers can blossom into their best business enabling selves.
In return, compliance leaders allow the enterprise to provide customers with the assurance they need. In today’s “trust but verify” world, customers trust the business when the compliance function can verify the enterprise security posture.
Collaboration, Communication, and Education
At its core, your compliance team acts as the communications glue that binds together the various cybersecurity functions.
For Tom Schneider, who is a part of the DataBee team as a Cybersecurity GRC Professional Services Consultant, communication has been essential to his career. When working to achieve compliance with a control, communicating clearly and specifically is critical, especially when cybersecurity is not someone’s main responsibility. Clear communication educates both sides of the compliance equation.
“Throughout my career, I have learned from the many people I’ve worked with. They have included management, internal and external customers, and auditors. I’ve learned from coworkers that were experts in some specific technology or process, such as vulnerability management or identity management, as well as from people on the business side and how things appear from their perspective.”
GRC’s cross-functional nature makes compliance leaders some of the enterprise’s most impactful teachers and learners. Compliance officers collaborate across different functions - security, IT, and senior leadership. As they learn from their internal partners, they, in turn, educate others.
Compliance officers are so much more than the controls they document and the checklists they review. They facilitate collaboration because they can communicate needs and build a shared language.
Compliance Officers: Keeping It All Together
A compliance officer’s role in your organization goes far beyond their job descriptions. They are cross-functional facilitators, mentors, learners, leaders, enablers, and reviewers. They are the ones who double check the organization’s cybersecurity work. Every day, they work quietly in the background, but for one day every year, we have the opportunity to let them know how important they are to the business.
DataBee from Comcast Technology Solutions gives your compliance officer a way to keep their compliance and business data together so they can communicate more effectively and efficiently. Our security data fabric empowers all three lines of defense - operational managers, risk management, and internal audit - so they can leave behind spreadsheets and point-in-time compliance reporting, relics of the past. By leveraging the full power of your organization's data, compliance officers can implement continuous controls monitoring (CCM) with accurate compliance dashboards and reports for measuring risk and reviewing controls' effectiveness.
From our Comcast compliance team to yours, thank you for all you do. We see you and appreciate you - today and every day.
Best practices for PCI DSS compliance...and how DataBee for CCM helps
To help organizations plan for compliance with the Payment Card Industry Data Security Standard (PCI DSS), the PCI Security Standards Council (SSC) publishes a document that provides excellent foundational advice for both overall cybersecurity and PCI DSS compliance. Organizations may already be aware of it, but regardless, it is a useful resource. And it is interesting to read with Continuous Controls Monitoring (CCM) in mind.
The document lists 10 recommended best practices, which are useful not just for PCI DSS compliance, but for overall security and for compliance with organizational policies as well as the frameworks and regulations to which the entity is subject. The best practices place a strong emphasis on ongoing, continuous compliance. That is, for organizations "to protect themselves and their customers from potential losses or damages resulting from a data breach, they must strive for ways to maintain a continuous state of compliance throughout the year rather than simply seeking point-in-time validation."
While the immediate goal may be to attain a compliant Report on Compliance (ROC), that immediate goal, and the longer-term viability of the security program, are aided by establishing a program around continuous compliance and the ability to measure it.
Here are the SSC’s 10 Best Practices for Maintaining PCI DSS Compliance:
Develop and Maintain a Sustainable Security Program
Develop Program, Policy, and Procedures
Develop Performance Metrics to Measure Success
Assign Ownership for Coordinating Security Activities
Emphasize Security and Risk Management to Attain and Maintain Compliance
Continuously Monitor Security Controls
Detect and Respond to Security Control Failures
Maintain Security Awareness
Monitoring Compliance of Third-Party Service Providers
Evolve the Compliance Program to Address Changes
Some detail around the 10 recommendations
The first recommendation, “Develop and Maintain a Sustainable Security Program” is short, but notes that, “Any cardholder data not deemed critical to business functions should be removed from the environment in accordance with the organization’s data-retention policies… In addition, organizations should evaluate business and operating procedures for alternatives to retaining cardholder data.” Outsourcing the processing of cardholder data to entities that specialize in this work is an option that many organizations take. When that is not a viable option, minimizing the amount of data collected, and securely deleting it as specified in the organization’s data retention policy is the next best option.
“Develop Program, Policy, and Procedures” is the second recommendation. Along with developing and maintaining these documents, accountability must be assigned “to ensure the organization's sustainable compliance.” Additionally, PCI DSS v4.0 has a requirement under each of the twelve principal requirements stating that “Roles and responsibilities for performing activities” for each principal requirement “are documented, assigned, and understood.” If this role does not already exist, something for organizations to consider would be designating a “compliance champion” for each business unit. The compliance champions could work with their management to assume accountability for the control compliance for assets and staff assigned to the business unit.
“Develop Performance Metrics to Measure Success” follows. This recommendation includes “Implementation metrics” (which measure the degree to which a control has been implemented, and are usually described as percentages), and “Efficiency and Effectiveness Measures” (which evaluate attributes such as completeness, consistency, and timeliness). These metrics show if a control has been implemented over the expected range of the organization’s assets, if it has been implemented consistently, and is being executed when expected. These metrics play a key role in assessing compliance in a continuous way.
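As a simple illustration of an implementation metric, the hypothetical sketch below computes the percentage of in-scope assets on which a given control (here, an EDR agent) is present. The asset fields and the control check are assumptions made for the example, not DataBee's data model.

```python
# Hypothetical asset inventory records; field names are illustrative only.
assets = [
    {"name": "pos-01",  "pci_in_scope": True,  "edr_agent_installed": True},
    {"name": "pos-02",  "pci_in_scope": True,  "edr_agent_installed": False},
    {"name": "kiosk-7", "pci_in_scope": True,  "edr_agent_installed": True},
    {"name": "dev-lab", "pci_in_scope": False, "edr_agent_installed": False},
]

def implementation_metric(records, control_field):
    """Percent of in-scope assets where the given control is implemented."""
    in_scope = [r for r in records if r["pci_in_scope"]]
    if not in_scope:
        return 0.0
    implemented = sum(1 for r in in_scope if r[control_field])
    return 100.0 * implemented / len(in_scope)

print(f"EDR coverage: {implementation_metric(assets, 'edr_agent_installed'):.1f}%")
# EDR coverage: 66.7%
```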
Measurement of implementation metrics and effectiveness metrics for completeness and consistency is a core component of DataBee for CCM. For example, in the case of Asset Management, users can see if assets in scope for PCI DSS are flagged as in scope correctly, if the asset owner is accurate, and if other data points such as physical location are present. The ability to see continuously refreshed data on a CCM dashboard, as opposed to having to create a point-in-time report or have the knowledge to access this data through a product-specific portal, makes it practical for teams to see accurate metrics in an efficient way.
The fourth recommendation is to “Assign Ownership for Coordinating Security Activities.” An “individual responsible for compliance (a Compliance Manager)” is the main point of this recommendation. However, the recommendation notes that Compliance Manager should be “given adequate funding and resources… and granted the proper authority to effectively organize and allocate such resources.” The effective organization of resources could include delegating tasks throughout the organization to managers over units within the larger organization. This recommendation ends by noting that the organization must ensure that “the goals and objectives of its compliance program are consistently achieved despite changes in program ownership (i.e., employee turnover, change of management, organization merger, re-organization, etc.). Best practices include proper knowledge transfer, documentation of existing controls and the associated responsible individual(s) or team(s).”
Using the DataBee for CCM dashboards to assign accountability for assets and staff to the appropriate business units helps with this recommendation.
It clarifies the delegation of responsibility for assets and staff to the business unit’s management.
Furthermore, it would help drive the effective achievement of objectives of the compliance program during transitions in the Compliance Manager role.
Delegation of control compliance to the business unit’s management would enable them to continue with their tasks while a new Compliance Manager is hired and during the time needed for the Compliance Manager to adjust to their role.
“Emphasize Security and Risk Management to Attain and Maintain Compliance,” the fifth recommendation asserts that “PCI DSS provides a minimum set of security requirements for protecting payment card account data…,” and that “Compliance with industry standards or regulations does not inherently equate to better security.”
This point cannot be emphasized highly enough: “A more effective approach is to focus on building a culture of security and protecting an organization’s information assets and IT infrastructure and allow compliance to be achieved as a consequence.” The ongoing measurement of control implementation by CCM supports a culture of security. Organizations can use the information provided by DataBee for CCM to not only enable continuous reporting, but through it to support continuous remediation of control failures.
The next recommendation, “Continuously Monitor Security Controls,” describes how “the use of automation in both security management and security-control monitoring can provide a tremendous benefit to organizations in terms of simplifying monitoring processes, enhancing continuous monitoring capabilities, and minimizing costs while improving the reliability of security controls and security-related information.”
Ongoing monitoring of data that is frequently refreshed can be a core component for ongoing compliance. Ultimately, implementing a continuous controls monitoring program will help reduce extra workload as the PCI DSS assessment date approaches. DataBee for CCM is a tool that supports the necessary continuous monitoring.
The seventh recommendation, “Detect and Respond to Security Control Failures,” applies to two situations:
controls which have failed, but with no detectable consequences, and
control failures that escalate to security incidents.
PCI SSC notes that, “The longer it takes to detect and respond to a failure, the higher the risk and potential cost of remediation.” Continuous monitoring can help the organization to reduce the time it takes to detect a failed control.
Recommendation eight, "Maintain Security Awareness," speaks to the need to train the workforce, especially regarding how to respond to social engineering. Security training, both for staff in general and role-based training for specific teams, is one of the requirements that DataBee for CCM reports on through its dashboards.
Recommendation nine is “Monitoring Compliance of Third-Party Service Providers,” and ten is “Evolve the Compliance Program to Address Changes.” A robust compliance program that is in place throughout the year can be more capable of evolving and adapting to change than an assessment focused program that allows controls to drift out of compliance between assessments. Continuous monitoring is key for combating compliance drift once an assessment has been completed.
After the ten recommendations, the main body of the document concludes with a section about the “Commitment to Maintaining Compliance.” Two of the key actions for maintaining continuous compliance are, “Assigning responsibility for ensuring the achievement of their security goals and holding those with responsibility accountable,” and “Developing tools, techniques, and metrics for tracking the performance and sustainability of security activities.” DataBee for CCM enables both these tasks.
The main theme of the “Best Practices for Maintaining PCI DSS Compliance” is that continuous compliance with PCI DSS that is maintained throughout the year is the goal. Ultimately, this helps improve the overall security posture of the organization. Making the required compliance activities business as usual tasks that are continuous throughout the year can also help with the specific goal of achieving a compliant result for a PCI DSS assessment when it comes due.
How DataBee for CCM fits in
We envisioned and realized DataBee for CCM as a fantastic fit for an evolving compliance program. Using the DataBee dashboards, with their continuously updated information that is accessible to everyone who needs to see it, helps free up time for GRC and other teams to focus on the evolution of the cybersecurity program. Given the rapid change in the cyber-threat landscape and the frequent changes in security controls and regulatory requirements, turning report creation over to CCM to give time back to your people for higher-value work is a win for your organization.
DataBee for CCM helps by providing consistent data to all teams (GRC, executive management, business management, IT, and so on) so that everyone is working from the same information. This helps to delegate control compliance and clearly identify accountable and responsible parties. Furthermore, DataBee for CCM shows executives, GRC, business managers, and others content for multiple controls, from many different tools, through a single interface (as opposed to GRC needing to create multiple reports, or business managers and others having to create their own, possibly erroneous, reports). Additional dashboards can be created to report on other controls that are in scope for PCI DSS, such as secure configuration, business continuity, and monitoring the compliance of third-party service providers. Any control for which data is available to create useful dashboard content is a candidate for a DataBee for CCM dashboard.
Enter the golden age of threat detection with automated detection chaining
During my time as a SOC analyst, triaging and correlating alerts often felt like solving a puzzle without the box or knowing if you had all the pieces.
My days consisted of investigating alerts in an always-growing incident queue. An investigation could start with a single high or critical alert, and then I would hunt through other log sources to piece together what happened. I had to ask myself (and my team) whether this alert and that alert had any identifiable relationships or patterns with the ones we had investigated that day, even though the alerts looked unrelated by themselves. Most investigations inevitably relied on institutional knowledge to find the pieces of the puzzle, searching by IP in one data source and by computer name in another. Finding the connections between low and slow attacks in near real-time was a matter of chance; these connections often slipped through the cracks of security operations and were discovered only via threat-hunting efforts. This isn't an uncommon story, and it's not new either: the same problems surfaced in the 2013 Target breach and the 2024 National Public Data Network breach.
That's why we launched automated detection chaining as part of the DataBee for Security Threats solution. Using a patent-pending approach to entity resolution, the security data fabric platform can chain together alerts from disjointed tools that could potentially be tied to an advanced persistent threat, insider threat, or compromised asset. What I like to call a "super alert" is presented in DataBee EntityViews™, which aggregates alerts into a time-series, or chronological, view. Now it's easier to find attacks that span security tools and the MITRE ATT&CK framework. With our out-of-the-box detection chain, you can automatically create a super alert before the adversary reaches the command-and-control phase.
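Conceptually, a detection chain groups alerts from different tools by the resolved entity and checks whether they cover successive attack stages. The following is a simplified, hypothetical sketch of that idea; the event fields, tactic names, and threshold are assumptions for illustration, not DataBee's implementation.

```python
from datetime import datetime

# Hypothetical alerts from different tools, already resolved to one user entity.
alerts = [
    {"time": "2024-05-01T09:02:00", "tool": "email_gw",  "tactic": "initial-access"},
    {"time": "2024-05-01T09:30:00", "tool": "edr",       "tactic": "execution"},
    {"time": "2024-05-01T11:15:00", "tool": "auth_logs", "tactic": "lateral-movement"},
]

# A simplified ordering of ATT&CK tactics leading up to command-and-control.
CHAIN = ["initial-access", "execution", "lateral-movement", "command-and-control"]

def build_super_alert(entity_alerts, min_stages=3):
    """Return a chronological 'super alert' if alerts span enough chain stages."""
    timeline = sorted(entity_alerts, key=lambda a: datetime.fromisoformat(a["time"]))
    stages = {a["tactic"] for a in timeline if a["tactic"] in CHAIN}
    # Fire before command-and-control is ever observed.
    if len(stages) >= min_stages and "command-and-control" not in stages:
        return {"severity": "critical",
                "stages": sorted(stages, key=CHAIN.index),
                "evidence": timeline}
    return None

print(build_super_alert(alerts))
```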
Break free from vendor-specific detections with Sigma Rules
Once a security tool is fully deployed in the network and environment, it becomes nearly impossible to change vendors without significant operational impact. The impact is more than just replacing the existing solution; it also means updating all upstream and downstream integration points, such as custom detection content or log parsers. This leads to potential gaps in coverage due to limitations in the tooling deployed compared with the tools desired. The alternative is to standardize logging to a vendor-agnostic schema and then apply an open-source detection framework.
The DataBee platform automates migration to the Open Cybersecurity Schema Framework (OCSF), which has become increasingly popular with security professionals and is gaining adoption in some tools. Its vendor-agnostic approach standardizes disparate security logs and data feeds, giving SOC teams the ability to use their security data more effectively. Active detection streams in DataBee apply Sigma-formatted rules over security data that is mapped to a DataBee-extended version of OCSF, integrating into the existing security ecosystem with minimal customization. DataBee handles the translation from the Sigma taxonomy to OCSF to lower the level of effort needed to adopt it and to support organizations on their journey to vendor-agnostic security operations. Sigma-formatted detections are imported and managed via GitHub, enabling teams to treat detections as code. By breaking free of proprietary formats, teams can more easily use vendor-agnostic Sigma rules to gain security insights from across all their tools, including data stored in security data lakes and warehouses.
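As a rough illustration of the translation step, the sketch below takes a Sigma-style selection (normally expressed in YAML) and rewrites its field names to OCSF-style attribute paths before matching events. The mapping table, field names, and flattened event shape are simplified assumptions, not DataBee's actual translation layer.

```python
# A Sigma-style selection, shown here as a dict for simplicity.
sigma_selection = {"EventID": 4625, "TargetUserName": "admin"}

# Hypothetical mapping from Sigma taxonomy fields to OCSF-style attribute paths.
SIGMA_TO_OCSF = {
    "EventID": "metadata.event_code",
    "TargetUserName": "user.name",
}

def translate_selection(selection):
    """Rewrite Sigma field names into OCSF-style attribute paths."""
    return {SIGMA_TO_OCSF.get(field, field): value for field, value in selection.items()}

def matches(event, selection):
    """Check a flattened OCSF-style event against the translated selection."""
    return all(event.get(path) == value for path, value in selection.items())

ocsf_event = {"metadata.event_code": 4625, "user.name": "admin", "src_endpoint.ip": "10.0.0.7"}
print(matches(ocsf_event, translate_selection(sigma_selection)))  # True
```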
The accidental insider threat
Accidental insider threats often begin with a phishing attack containing a malicious link or download that tricks the user. The malware is too new or has morphed enough to evade your endpoint detection. Then it spreads to whatever other devices it can authenticate to. Detecting the scope of the malware's lateral movement is challenging because there is so much noise to search through. With DataBee EntityViews, SOC teams can easily review the historical information connected to the organization's real-world people and devices, giving them a way to trace the progression of events.
Looking at a user’s profile shows relevant business contexts that can aid the investigation:
Job Title to hint at what is normal behavior
Manager to know who to go to for questions or if action needs to be taken
Owned assets that may be worth investigating further
The Event Timeline shows the various types of OCSF findings associated with the user.
By scrolling through the list of findings, a SOC analyst can quickly identify several potential issues, including malware present on the workstation. Most notably, the MITRE ATT&CK detection chain has triggered. In this instance, multiple data sources alerted on different parts of the ATT&CK chain, producing a super alert. The originating events are maintained as evidence and are easily accessible to the analyst:
EntityViews also bring in events from devices that the current user owns, helping to simplify the process of pulling together the whole story. In our example, the device is the user's laptop, so it is likely that all of the activity was carried out by the user:
The first thing of note is the unusual number of authentication attempts to devices that seem atypical for a developer, such as a finance server. As we continue to scroll through the user's timeline, reviewing events from a variety of data sources, we finally come across our smoking gun. In this instance, we can see the phishing email whose link the user clicked, our initial point of compromise:
It's clear the device has malware on it, and the authentication attempts imply that the malware was looking to spread further in the network. To visualize this activity, we can leverage the Related Entities graphical view in the Activity section of EntityViews. SOC analysts can use a graphical representation and animation of the activity to visualize the connections between the compromised user and the rest of the organization. The graph displays other users and devices that appear in security findings, authentication, and ownership events. In our example, we can see that the user has attempted to authenticate to some atypical devices, such as an HR system:
Filtering enables more targeted investigations, like focusing on only the successful authentication attempts:
Visualizations such as this in DataBee enable more accurate, timely, and complete investigations. From this view, SOC analysts can select any entity to see its EntityView with the activity associated with the related users and devices. Rather than pivoting between multiple applications or waiting for data to be reprocessed, they have real-time access to information in an easy-to-consume format.
Customizing detection chains to achieve organizational objectives
Detection Chains are designed to enable advanced threat modeling in a simple solution. Detection Chains can be created in the DataBee platform leveraging all kinds of events that flow through the security data fabric. DataBee ships with two detection chains to get you started:
MITRE ATT&CK Chain: Detect advanced low and slow attacks that span the MITRE ATT&CK chain before reaching Command & Control.
Potential Insider Threat: Detect insider threats who are printing out documents, emailing personal accounts, and messing with files in the file share.
These chains serve as a starting point. The intent is that organizations add and remove chains based on their specific needs. For example, you may want to extend the potential insider threat rule to include more potential email domains or limit file share behavior to accessing files that contain trade secrets or sales information.
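To illustrate what such a customization might look like, here is a hypothetical, declarative sketch of an insider threat chain extended with additional personal email domains and a narrower file share filter. The structure, keys, and matching logic are invented for illustration and do not represent DataBee's configuration format.

```python
# Hypothetical chain definition: each step names an activity type and a filter.
potential_insider_threat_chain = [
    {"step": "bulk_printing",
     "filter": {"activity": "print", "min_pages": 50}},
    {"step": "personal_email_exfil",
     "filter": {"activity": "email_sent",
                "recipient_domain_in": ["gmail.com", "outlook.com", "proton.me"]}},
    {"step": "sensitive_file_access",
     "filter": {"activity": "file_read",
                "path_contains_any": ["trade_secrets", "sales_pipeline"]}},
]

def chain_triggered(entity_events, chain):
    """Fire only when every step in the chain has at least one matching event."""
    def step_matches(event, flt):
        if event.get("activity") != flt["activity"]:
            return False
        if "min_pages" in flt and event.get("pages", 0) < flt["min_pages"]:
            return False
        if "recipient_domain_in" in flt and event.get("recipient_domain") not in flt["recipient_domain_in"]:
            return False
        if "path_contains_any" in flt and not any(s in event.get("path", "") for s in flt["path_contains_any"]):
            return False
        return True
    return all(any(step_matches(e, step["filter"]) for e in entity_events) for step in chain)

events = [
    {"activity": "print", "pages": 120},
    {"activity": "email_sent", "recipient_domain": "proton.me"},
    {"activity": "file_read", "path": "/shares/finance/trade_secrets/q3.xlsx"},
]
print(chain_triggered(events, potential_insider_threat_chain))  # True
```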
Automated detection chains are nearly infinitely flexible. By chaining together detections from different data sources that align to different parts of the attack chain, specific to a user or device, DataBee enables building advanced security analytics for hunting elusive APTs and getting ahead of pesky ransomware attacks.
Building a better way forward with DataBee
Every organization is different, and every SOC team has unique needs. DataBee’s automated detection chaining feature gives SOC analysts a faster way to investigate complex security incidents, enabling them to rapidly and intuitively move through vast quantities of historical data.
If you're ready to gain the full value of your security data with an enterprise-ready security, risk, and compliance data fabric, request a custom demo to see how DataBee for Security Threats can turn static detections into dynamic insights.
You've reduced data, so what's next?
Organizations often adopt data tiering to reduce the amount of data that they send to their analytics tools, like Security Information and Event Management (SIEM) solutions. By diverting data to an object store or a data lake, organizations are able to manage and lower costs by minimizing the amount of data that their SIEM stores. Although they achieve this tactical objective, the process creates data silos. While people can query the data in isolation, they often fail to glean collective insights across the silos.
Think of the problem like a large building with cameras across its perimeter. The organization can monitor each camera's viewpoint, but no individual camera has the full picture, as any spy movie will tell you. Similarly, you might have different tools that see different parts of your security picture. Although SIEMs were originally intended to tie all security data together into a composite, cloud applications and other modern IT and cybersecurity tool stacks generate too much data to make this cost-effective.
As organizations balance saving money with having an incomplete picture, a high-quality data fabric architecture can enable them to build more sustainable security data strategies.
From default to normalized
When you implement a data lake, the diverted data remains in its default format. When you try to paint a composite picture across these tools, you rely on what an individual data set understands or sees, leaving you to pick out individual answers from these siloed datasets.
Instead of asking a question once, you need to ask fragments of the question across different data sets. In some cases, you may have a difficult time ensuring that you have the complete answer.
With a security data fabric, you can normalize the data before landing it in one or more repositories. DataBee® from Comcast Technology Solutions uses extract, transform, and load processes to automatically parse security data, then normalizes it according to our extended Open Cybersecurity Schema Framework (OCSF) so that you can correlate and understand what’s happening in the aggregate picture.
By normalizing the data on its way to your data lake, you optimize compute and storage costs, eliminating some of the constraints arising from other data federation approaches.
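Here is a minimal sketch of the normalize-before-landing idea: a raw vendor log entry is parsed and mapped to a handful of OCSF-style fields before it is written to the lake. The field names are simplified for illustration and the mapping is an assumption, not DataBee's actual pipeline.

```python
import json

# A raw, vendor-specific authentication log entry (fields are illustrative).
raw_event = {
    "usr_name": "jsmith",
    "src": "192.0.2.10",
    "action": "login_failed",
    "ts": "2024-05-01T09:02:00Z",
}

# Hypothetical mapping into simplified OCSF-style Authentication fields.
def to_ocsf(vendor_event):
    return {
        "class_name": "Authentication",
        "time": vendor_event["ts"],
        "user": {"name": vendor_event["usr_name"]},
        "src_endpoint": {"ip": vendor_event["src"]},
        "status": "Failure" if "failed" in vendor_event["action"] else "Success",
    }

# Normalizing on the way in means every source lands in the same shape,
# so downstream queries never need vendor-specific field names.
print(json.dumps(to_ocsf(raw_event), indent=2))
```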
Considering your constraints
Federation reduces storage costs, but it introduces limitations that can present challenges for security teams.
Latency
When you move data from one location to another, you introduce various time lags. Some providers limit when, or how many times per day, you can transfer data. For example, if you want data in a specific format, some repositories may only manage this transfer once per day.
Meanwhile, if you want to stream the data into a different format for collection, the reformatting can also create a time lag. A transformation and storage process may take several minutes, which can impact key cybersecurity metrics like mean time to detect (MTTD) or mean time to respond (MTTR).
When you query security datasets to learn what happened over the last hour, a (near) real-time data source will contribute to an accurate picture, while a delayed source may not have yet received data for the same period. As you attempt to correlate the data to create a timeline, you might need to use multiple data sources that all have different lag times. For example, some may be mostly real-time while another sends data five minutes later. If you ask the question at the time an event occurred, the system may not have information about it for another five minutes, creating a visibility gap.
Such gaps can create blind spots as you scale your security analytics strategy. The enterprise security team may be asking hundreds of questions across the data system, and the time delay can create a large gap between what you can see and what happened.
Correlation
Correlating activities from across your disparate IT and security tools is critical. Data gives you facts about an event while correlation enables you to interpret what those facts mean. When you ask fragments of a question across data silos, you have no way to automate the generation of these insights.
For example, a security alert will give you a list of events including hundreds of failed login attempts over three minutes. While you have these facts, you still need to interpret whether they describe malicious actors using stolen credentials or a brute force attack.
To improve detections and enable faster response times, you need to weave together information like:
The IP address(es) involved over the time the event occurred
The user associated with the device(s)
The user’s geographic location
The network access permissions for the user and device(s)
You may be storing this data in different repositories without correlation capabilities. For example, you may have converged all DNS, DHCP, firewall, EDR, and Proxy data in one repository while combining user access and application data in another. To get a complete picture of the event, you need to make at least, although likely more than, two single-silo queries.
While you may have reduced data storage costs, you have also increased the duration and complexity of investigating incidents, which gives malicious actors more time in your systems, making it more difficult to locate them and contain the threat.
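The sketch below shows the kind of join that becomes trivial once both silos share a normalized shape: network alerts keyed by IP are enriched with user and access context from a second dataset. The datasets and field names here are hypothetical and purely illustrative.

```python
# Silo 1: hypothetical network/EDR alerts keyed by source IP.
network_alerts = [
    {"ip": "10.1.4.22", "event": "failed_login_burst", "count": 312, "window_min": 3},
]

# Silo 2: hypothetical user access data keyed by the same IP.
user_access = {
    "10.1.4.22": {"user": "jsmith", "location": "Denver, US", "network_role": "developer"},
}

def correlate(alerts, identities):
    """Attach user context to each alert so analysts see one picture, not two queries."""
    for alert in alerts:
        context = identities.get(alert["ip"], {})
        yield {**alert, **context}

for enriched in correlate(network_alerts, user_access):
    print(enriched)
# {'ip': '10.1.4.22', 'event': 'failed_login_burst', 'count': 312, 'window_min': 3,
#  'user': 'jsmith', 'location': 'Denver, US', 'network_role': 'developer'}
```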
Weaving together federated data with DataBee
Weaving together data tells you what and when something happened, enabling insights into activity rather than just a list of records. With a fabric of data, you can interpret it to better understand your environment or gain insights about an incident. With DataBee, you can focus on protecting your business while achieving tactical and strategic objectives.
At the tactical level, DataBee fits into your cost management strategies because it focuses on collecting and processing your data in a streamlined, affordable way. It ingests security and IT logs and feeds, including non-traditional telemetry like organizational hierarchy data, from APIs, on-premises log forwarders, AWS S3 buckets, or Azure Blobs, then automatically parses and maps the data to OCSF. You can use one or more repositories, aligning with cost management goals. Simultaneously, data users can access accurate, clean data through the platform to build reliable analytics without worrying about data gaps.
The platform enriches your dataset with business policy context and applies patent-pending entity resolution technology so you can gain insights based on a unified, time-series dataset. This transformation and enrichment process breaks down silos so you can efficiently and effectively correlate data to gain real-time insights, empowering operational managers, security analysts, risk management teams, and audit functions.
The value of OCSF from the point of view of a data scientist
Data can come in all shapes and sizes. As the “data guy” here at DataBee® (and the “SIEM guy” in a past life), I’ve worked plenty with logs and data feeds in different formats, structures, and sizes delivered using different methods and protocols. From my experience, when data is inconsistent and lacks interoperability, I’m spending most of my time trying to understand the schema from each product vendor and less time on showing value or providing insights that could help other teams.
That’s why I’ve become involved in the Open Cybersecurity Schema Framework (OCSF) community. OCSF is an emerging but highly collaborative schema that aims to standardize security and security-related data to improve consistency, analysis, and collaboration. In this blog, I will explain why I believe OCSF is the best choice for your data lake.
The problem of inconsistency
When consuming enterprise IT and cybersecurity data from disparate sources, most of the concepts are the same (like an IP address or a hostname or a username) but each vendor may use a different schema (like the property names) as well as sometimes different ways to represent that data.
Example: How different vendors represent a username field
Vendor | Raw Schema Representation
Vendor A (Firewall) | user.name
Vendor B (SIEM) | username
Vendor C (Endpoint) | usr_name
Vendor D (Cloud) | identity.user
Even if the same property name is used, sometimes the range of values or classifications might vary.
Example: How different vendors represent “Severity” with different value ranges
Vendor | Raw Schema Representation | Possible Values
Vendor A (Firewall) | severity | low, medium, high
Vendor B (SIEM) | severity | 1 (critical), 2 (high), 3 (medium), 4 (low)
Vendor C (Endpoint) | severity | info, warning, critical
Vendor D (Cloud) | severity | 0 (emergency) through 7 (debug)
In a non-standardized environment, these variations require custom mappings and transformations before consistent analysis can take place. That’s why data standards can be helpful to govern how data is ingested, stored, and used, maintaining consistency and quality so that it can be used across different systems, applications, and teams.
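Here is a small sketch of the kind of custom mapping those variations force on you today: each vendor's severity representation is translated into one shared scale. The vendor labels mirror the tables above; the target scale and mappings are an illustrative choice.

```python
# Translate each vendor's severity representation into one shared scale.
# Target scale (illustrative): informational, low, medium, high, critical.
def normalize_severity(vendor, value):
    if vendor == "firewall":                     # low / medium / high
        return str(value).lower()
    if vendor == "siem":                         # 1 (critical) .. 4 (low)
        return {1: "critical", 2: "high", 3: "medium", 4: "low"}[int(value)]
    if vendor == "endpoint":                     # info / warning / critical
        return {"info": "informational", "warning": "medium", "critical": "critical"}[value]
    if vendor == "cloud":                        # syslog-style 0 (emergency) .. 7 (debug)
        level = int(value)
        if level <= 2:
            return "critical"
        return {3: "high", 4: "medium", 5: "low"}.get(level, "informational")
    raise ValueError(f"unknown vendor: {vendor}")

print(normalize_severity("siem", 2), normalize_severity("cloud", 6))  # high informational
```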
How can a standard help?
In the context of data modeling, a "standard" is a widely accepted set of rules or structures designed to ensure consistency across systems. The primary purpose of a standard is to achieve normalization: ensuring that data from disparate sources can be consistently analyzed within a unified platform like a security data lake or a security information and event management (SIEM) solution. From a cybersecurity standpoint, this becomes evident in at least a few common scenarios:
Analytics: A standardized schema enables the creation of consistent rules, models, and dashboards, independent of the data source or vendor. For example, a rule to detect failed login attempts can be applied uniformly, regardless of whether the data originates from a firewall, endpoint security tool, or cloud application (see the sketch after this list).
Threat Hunting - Noise Reduction: With normalized fields, filtering out irrelevant data becomes more efficient. For instance, if every log uses a common field for user identity (like username), filtering across multiple log sources becomes much simpler.
Threat Hunting - Understanding the Data: Having a single schema instead of learning multiple vendor-specific schemas reduces cognitive load for analysts, allowing them to focus on analysis rather than data translation.
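As a rough illustration of that first scenario, the sketch below applies one failed-login rule to events that have already been normalized into a shared shape. The field names (“username”, “activity”, “status”) and the threshold are assumptions for this example, not the definitions of any particular standard.

```python
# Illustrative only: once events share one schema, a single rule covers
# every source. Field names and the threshold are assumptions for this
# sketch, not a specific standard's definitions.
from collections import Counter

def failed_login_alerts(normalized_events, threshold=5):
    """Flag any user with too many failed logins, regardless of log source."""
    failures = Counter(
        e["username"]
        for e in normalized_events
        if e.get("activity") == "logon" and e.get("status") == "failure"
    )
    return {user: count for user, count in failures.items() if count >= threshold}

events = [
    {"source": "firewall", "username": "sarah", "activity": "logon", "status": "failure"},
    {"source": "endpoint", "username": "sarah", "activity": "logon", "status": "failure"},
    {"source": "cloud", "username": "sarah", "activity": "logon", "status": "failure"},
    {"source": "cloud", "username": "amir", "activity": "logon", "status": "success"},
]
print(failed_login_alerts(events, threshold=3))  # {'sarah': 3}
```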
For log data, several standards exist. Some popular ones are: Common Event Format (CEF), Log Event Extended Format (LEEF), Splunk’s Common Information Model (CIM), and the Elastic Common Schema (ECS). Each has its strengths and limitations depending on the use case and platform.
Why existing schemas like CEF and LEEF fall short
Common Event Format (CEF) and Log Event Extended Format (LEEF) are widely used schemas, but they are often too simplistic for modern data lake and analytics use cases.
Limited Fields: CEF and LEEF offer a limited set of predefined fields, meaning most log data ends up in custom fields, which defeats the purpose of a standardized schema.
Custom Fields Bloat: In practice, most data fields are defined as custom, leading to inconsistencies and a lack of clarity. This results in different interpretations of the same data types, complicating analytics.
Overloaded Fields: Without sufficient granularity, crucial data gets overloaded into generic fields, making it hard to distinguish between different event types.
Example: Overloading a single field like “message” to store multiple types of information (e.g., event description, error code) creates ambiguity and reduces the effectiveness of automated analysis.
The limits of CIM and ECS: vendor-specific constraints
Splunk CIM and Elastic ECS are sophisticated schemas that better address the needs of modern environments, but they are tightly coupled to their respective ecosystems.
Proprietary Optimizations:
CIM: Although widely used within Splunk, CIM is proprietary and lacks an open-source community for contributions to the schema itself. Its design focuses on Splunk’s use cases, which can be limiting in broader environments.
ECS: While open-source, ECS remains heavily influenced by Elastic’s internal needs. For instance, ECS optimizes data types for Elastic’s indexing and querying, like the distinction between keyword and text fields. Such optimizations can be unnecessary or incompatible with non-Elastic platforms.
Field Ambiguity:
CIM uses fields like src and dest, which lack precision compared to more explicit options like source.ip or destination.port. This can lead to confusion and the need for additional context when performing cross-platform analysis.
Vendor-Centric Design:
CIM: The field definitions and categories are tightly aligned with Splunk’s correlation searches, limiting its relevance outside Splunk environments.
ECS: Data types like geo_point are unique to Elastic’s product features and capabilities, making the schema less suitable when integrating with other tools.
How OCSF addresses these challenges
The OCSF was developed by a consortium of industry leaders, including AWS, Splunk, and IBM, with the goal of creating a truly vendor-neutral and comprehensive schema.
Vendor-Neutral and Tool-Agnostic: OCSF is designed to be applicable across all logs, not just security logs. This flexibility allows it to adapt to a wide variety of data sources while maintaining consistency.
Open-Source with Broad Community Support: OCSF is openly governed and welcomes contributions from across the industry. Unlike ECS and CIM, OCSF’s direction is not controlled by a single vendor, ensuring it remains applicable to diverse environments.
Specificity and Granularity: The schema’s granularity aids in filtering and prevents the overloading of concepts. For example, OCSF uses specific fields like identity.username and network.connection.destination_port, providing clarity while avoiding ambiguous terms like src.
Modularity and Extensibility: OCSF’s modular design allows for easy extensions, making it adaptable without compromising specificity. Organizations can extend the schema to suit their unique use cases while remaining compliant with the core model.
In DataBee’s own implementation, we’ve extended OCSF to include custom fields specific to our environment, without sacrificing compatibility or requiring extensive custom mappings. For example, we added the assessment object, which can be used to describe data around third-party security assessments or internal audits. This kind of log data doesn’t come from your typical security products but is necessary for the kind of use cases you can achieve with a data lake.
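To give a feel for what this looks like in practice, here is a hand-written illustration of an OCSF-style event alongside a record that carries a custom extension object. The field names follow OCSF conventions, but the published schema is the authoritative reference, and the assessment object shown is a hypothetical stand-in rather than DataBee’s actual extension definition.

```python
# A hand-written illustration of the shape of an OCSF-style event and of an
# organization-specific extension. Field names follow OCSF conventions, but
# the published schema is the authoritative reference; the "assessment"
# object below is a hypothetical stand-in, not DataBee's actual extension.
import json

# Core OCSF-style authentication event: the class/category/activity codes
# identify what happened, while nested objects carry the details.
auth_event = {
    "class_uid": 3002,                       # Authentication (per OCSF)
    "category_uid": 3,                       # Identity & Access Management
    "activity_id": 1,                        # Logon
    "severity_id": 2,
    "status_id": 2,                          # Failure
    "time": 1718000000000,                   # epoch milliseconds
    "user": {"name": "sarah"},
    "src_endpoint": {"hostname": "wkstn-17"},
    "metadata": {"version": "1.1.0", "product": {"name": "ExampleIdP"}},
}

# An extension record reuses core attributes (time, metadata) and adds a
# custom object for data that typical security products don't emit, such as
# third-party assessments or internal audits.
assessment_record = {
    "time": 1718000000000,
    "metadata": {"version": "1.1.0"},        # illustrative metadata
    "assessment": {                          # hypothetical custom object
        "name": "Q3 third-party security assessment",
        "vendor_name": "ExampleAuditor",
        "status": "In Progress",
        "finding_count": 4,
    },
}

print(json.dumps(auth_event, indent=2))
print(json.dumps(assessment_record, indent=2))
```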
Having shared my experiences with some of the industry’s most common schemas, it’s natural to compare them side by side. The matrix below contrasts OCSF with two leading schemas.
OCSF Schema Comparison Matrix
Openness: OCSF is open-source and community- and multi-vendor-driven; Splunk CIM is proprietary and Splunk-driven; Elastic ECS is open-source but Elastic-driven.
Community Engagement: OCSF has a broad, inclusive, vendor-neutral community; Splunk CIM is limited to the Splunk community and apps; Elastic ECS has a strong Elastic community with centralized control.
Flexibility of Contribution: OCSF is contributable and modular and actively seeks community input; Splunk CIM takes no direct community contributions; Elastic ECS is contributable, but Elastic makes the final decisions.
Adoption Rate: OCSF adoption is early but growing rapidly across multiple vendors; Splunk CIM adoption is high within the Splunk ecosystem; Elastic ECS adoption is high within the Elastic ecosystem.
Vendor Ecosystem: OCSF enjoys broad support and is designed for multi-vendor use; Splunk CIM is Splunk-centric with limited use outside Splunk; Elastic ECS is Elastic-centric with some third-party integrations.
Granularity and Adaptability: OCSF is structured and specific yet modular, balancing adaptability with detailed extensibility; Splunk CIM is moderately structured with more generic fields, offering broad compatibility but less precision; Elastic ECS is highly granular and specific with tightly defined fields, but offers limited flexibility outside Elastic environments.
Best For: OCSF suits flexible, vendor-neutral environments that need both detail and adaptability; Splunk CIM suits broad compatibility in Splunk-centric environments; Elastic ECS suits consistent, detailed analysis within Elastic environments.
The impact of OCSF at DataBee
In working with OCSF, I have been particularly impressed with the combination of how detailed the schema is and how extensible it is. We can leverage its modular nature to apply it to a variety of use cases to fit our customers' needs, while re-using most of the schema and its concepts. OCSF’s ability to standardize and enrich data from multiple sources has streamlined our analytics, making it easier to track threats across different platforms and ultimately helping us deliver more value to our customers. This level of consistency and collaboration is something that no other schema has provided, and it’s why OCSF has been so impactful in my role as a data analyst.
If we have ideas for the schema that might be usable for others, the OCSF community is receptive to contributions. The community is already brimming with top talent in the SIEM and security data field and is there to help guide us in our mapping and schema extension decisions. The community-driven approach means that I’m not working in isolation; I have access to a wealth of knowledge and support, and I can contribute back to a growing standard that is designed to evolve with the industry.
Within DataBee as a product, OCSF enables us to build powerful correlation logic which we use to enrich the data we collect. For example, we know we can track the activities of a device regardless of whether the event came from a firewall or from an endpoint agent, because the hostname will always be device.name.
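As a small illustration of that consistency, the sketch below follows a single workstation across events from two different (hypothetical) products simply by filtering on device.name; the sample events themselves are made up.

```python
# Illustrative only: after OCSF normalization the hostname is always
# device.name, so a single filter follows one machine across products.
# The sample events are hand-written.
events = [
    {"metadata": {"product": {"name": "ExampleFirewall"}}, "device": {"name": "wkstn-17"}},
    {"metadata": {"product": {"name": "ExampleEDR"}}, "device": {"name": "wkstn-17"}},
    {"metadata": {"product": {"name": "ExampleEDR"}}, "device": {"name": "wkstn-99"}},
]

wkstn_17_activity = [e for e in events if e.get("device", {}).get("name") == "wkstn-17"]
print(len(wkstn_17_activity))  # 2 events, from two different products
```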
Whenever our customers have any questions about how our schema works, the self-documenting schema is always available at ocsf.databee.buzz (which includes our own extensions). This helps to enable as many users as possible to gain security and compliance insights.
Conclusion
As organizations continue to rely on increasingly diverse and complex data sources, the need for a standardized schema becomes paramount. While CEF, LEEF, CIM, and ECS have served important roles, their limitations—whether in scope, flexibility, or vendor-neutrality—make them less ideal for a comprehensive data lake strategy.
For me as a Principal Cybersecurity Data Analyst, OCSF has been transformative and represents the next evolution in standardization. With its vendor-agnostic, community-driven approach, OCSF offers the precision needed for detailed analysis while remaining flexible enough to accommodate the diverse and ever-evolving landscape of log data.
Political Advertising: the race to every screen moves fast
If there’s one thing that’s self-evident in this U.S. presidential election year, it’s this: political advertising moves fast.
"For agencies representing the vast number of campaigns across state and national campaigns, it’s not just about the sheer volume, but also about the need to respond quickly,” explains Justin Morgan, Sr. Director of Product Management for the CTS AdFusion team. “Just think about how the news cycle – in just the last few days alone – has necessitated rapid changes in messaging that needs to be handled accurately across every outreach effort.
According to GroupM, political advertising for 2024 is expected to surpass $16 billion – over four percent of the entire ad revenue for the year. As consumers, we’re certainly aware that campaign messaging is constant, and constantly changing, but what does that mean behind the scenes from a technology standpoint?
“Political advertisers have to maintain a laser-focus on messaging and results,” explains Morgan. “This makes accuracy and agility even more crucial, so that agencies can make changes fast and trust that every spot is not just placed accurately, but also represents their candidate(s) in the best possible quality, no matter where or how that ad gets seen.”
For the AdFusion team, it’s important to make ad management simple and streamlined so our partners can focus on the most important goal: winning the race. That’s why we focus on three areas to help our clients get better, faster, and smarter:
Better: Precisely target the right voters with the right message by leveraging program-level automated traffic and delivery
Faster: Quickly shift campaign strategy as the race evolves by delivering revised ads and traffic in minutes
Smarter: Drive compliance with one-hour invoicing and in-platform payments.
Learn more about how AdFusion improves the technology foundation for driving political campaigns here.
The challenges of a converged security program
It’s commonplace these days to assume we can learn everything about someone from their digital activity – after all, people share so much on social media and over digital chats. However, sophisticated threat actors are more careful about their digital footprint. To catch them, combining insights from their day-to-day activities in the physical world with their digital communications and activity can provide a better sense of whether there’s an immediate and significant threat that needs to be addressed.
Let’s play out this insider threat scenario. While this scenario is set in the financial services sector, a security analyst could easily see how it applies to other sectors. An investment banking analyst, Sarah, badges into a satellite office on a Saturday at 7 pm. Next, she logs onto a workstation and prints 200 pages of materials. These activities alone could look innocuous, but taken together, could there be something more going on?
As it turns out, Sarah tendered her resignation the prior Friday with 14 days’ notice. She leaves that Saturday night with two paper bags of confidential company printouts in tow to take to her next employer – a competing investment bank – to give her an edge.
A complete picture of her activity can be gleaned with logs from a few data sources:
HR data showing her status as pending termination, from a system like Workday or SAP
Badge reader logs
Sign in logs
Print logs
Video camera logs from the building’s entry and exit ways
While seemingly simple, piecing all this information together and taking steps to stop the employee’s actions or even recover the stolen materials is non-trivial. Today, companies are asking themselves, what type of technology is required to know that her behavior was immediately suspicious? And what type of security program can establish the objectives and parameters for quickly catching this type of insider threat?
What is a converged security program?
In the above scenario, sign-in logs and print logs alone aren’t necessarily suspicious. The suspicion level materially increases when you consider the combined context of her employment status with the choice of day and time to badge into the office. As such, converged security dataset analysis brings together physical security data points, such as logs from cameras or badge readers in the above example, and digital insights from activity on computers, computer systems or the internet. If these insights are normalized into the same dataset with clear consistency across user and device activity, they can be analyzed by physical security or cybersecurity analysts for faster threat detection. Furthermore, such collaboration can give way to physical and cybersecurity practitioners establishing a converged set of policies and procedures for incident response and recovery.
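To make the idea tangible, here is a minimal sketch, not DataBee’s implementation, of how a handful of normalized physical and digital records plus HR context can surface a pattern that no single record reveals on its own. All field names, sample data, and thresholds are illustrative assumptions.

```python
# A minimal sketch (not a product implementation) of converged analysis:
# none of these records is alarming alone, but a single per-user timeline
# plus HR context makes the pattern obvious. All field names, sample data,
# and thresholds are illustrative assumptions.
from datetime import datetime

hr_status = {"sarah": "pending_termination"}

events = [
    {"user": "sarah", "source": "badge", "action": "entry", "time": "2024-06-08T19:02:00"},
    {"user": "sarah", "source": "sign_in", "action": "logon", "time": "2024-06-08T19:10:00"},
    {"user": "sarah", "source": "printer", "action": "print", "pages": 200, "time": "2024-06-08T19:25:00"},
]

def timeline(user):
    """Order every physical and digital event for one user by time."""
    return sorted((e for e in events if e["user"] == user), key=lambda e: e["time"])

def suspicious(user):
    t = timeline(user)
    weekend = any(datetime.fromisoformat(e["time"]).weekday() >= 5 for e in t)
    bulk_print = any(e.get("pages", 0) >= 100 for e in t)
    leaving = hr_status.get(user) == "pending_termination"
    # Each signal is weak on its own; the combination is what matters.
    return weekend and bulk_print and leaving

print(suspicious("sarah"))  # True
```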
In his book, Roland Cloutier describes three important attributes of a converged security function:
One Table: everyone sitting together to discuss issues and create a sense of aligned missions, policies, and procedures for issue detection and response.
Interconnected Issue Problem Solving: identifying the problem as a shared mission and connecting resources in a way that resolves problems faster.
Link Analysis: bringing together data points about an issue or problem and correlating them to gain insights from data analytics.
Challenges of bringing together physical and information security
In today’s environment, the challenges of intertwining physical and digital security insights are substantial. Large international enterprises have campuses scattered across the world and a combination of in-office and remote workers. They may face challenges when employee data is fragmented across different physical and digital systems. Remote workers often don’t have physical security log information associated with their daily activity because they work from the confines of their homes, out of reach of corporate physical monitoring.
The modern workforce model further complicates managing physical and digital security as organizations contend with the:
Rise of remote work
Demise of the corporate network
Usage of personal mobile devices at work
Constant travel of business executives
A worker can no longer be tracked by movement in the building and on the corporate network. Instead, the person’s physical location and network connections change throughout the day. Beyond the technical challenges, organizations face hierarchical structure and human element challenges.
Many companies separate physical security from cybersecurity. One reason for this is that a different skillset may be required to stop the threats. Yet there is value in the two security leaders developing an operating model for collaboration that centers on a global data strategy with consistent and complete insights from the physical security and cybersecurity tools.
Consider a model that revolves around three principles:
Common data aggregation and analysis across physical and cybersecurity toolsets
Resource alignment for problem solving and response in the physical security org and the cybersecurity group.
A common set of metrics for accountability across the converged security discipline
Diverse, disconnected tools
Tool sprawl is a familiar problem in cybersecurity, and converged security faces it on an even wider scale. Each executive purchases tools for monitoring within their purview, and each tool produces data.
However, they either fail to gain insights from that data, or the insights they do achieve are limited to the problem the technology solves.
Access is a good example:
Identity and Access Management (IAM): sets controls that limit and manage how people interact with digital resources.
Employee badges: set control for what facilities and parts of facilities people can physically access.
Returning to the insider threat that Sarah Smith poses: the CISO’s information security organization has visibility into sign in and print logs but would have to collaborate with the CSO’s physical security group for badge logs. This process takes time and, depending on organizational politics, potentially requires convincing.
The siloed technologies could potentially create a security gap when the data remains uncorrelated:
The sign in and print logs alone may not sufficiently draw attention to Sarah’s activities in the CISO’s organization.
Badge-in logs in the CSO’s organization may not draw an alert.
HR data, such as employment/termination status, may not be correlated in either the CISO’s or CSO’s available analytical datasets.
Without weaving the various data sources together into one story, Sarah’s behavior has a high risk of going completely undetected – by anyone.
In a converged program, IAM access and badge access would be correlated to improve visibility. In a converged program with high security data maturity, the datasets would provide a more complete picture with insights that correlate HR termination status, typical employee location, and more business context.
Resource constraints
The challenge of resource alignment often begins by analyzing constraints. Both physical and digital security costs money. Many companies view these functions as separate budgets, requiring separate sets of technologies, leadership, and resources.
Converged security looks for synergies where overlaps can potentially reduce costs. For example:
Human Resources data: identifying all workforce members who should have physical and digital access.
IT system access: determining user access, with HR as the authoritative source behind Active Directory or IGA birthright provisioning and automatic access termination.
Building access: provisioning badges and terminating physical access according to HR status
The HR system, sign on system, and badge-in system each serve a separate recordation purpose, which can then provide monitoring functionality. However, by keeping insights from daily system usage separate, the data storage and analysis can grow redundant. As Cloutier notes, “siloed operations tend to drive confusion, frustration, and duplicative work streams that waste valuable resources and increase the load on any given functional area.” (24)
Instead, imagine if diverse recordation systems output data to a single location that parsed, correlated, and enriched data to create user profiles and user timelines so cross-functional teams with an interdisciplinary understanding of threat vectors could analyze it. In such an organization, this solution could
Reduce redundant storage.
Eliminate manual effort in correlating data sources from different systems.
Save analysts time by having all the data already in one spot (no need to gather it in the wake of an incident).
Allow for more rapid detection and response.
Metrics and accountability
Keeping physical security separate from cybersecurity can create the risk of disaggregated metrics and a lack of accountability. People must “compare notes” before making decisions, and the data may have discrepancies because everyone uses different technologies intended to measure different outcomes.
These data, tool, and operations silos can create an intricate, interconnected set of overlapping “blurred lines” across:
Personal/Professional technologies
Physical/Digital security functions
In the wake of a threat, the last thing people want to do is increase the time making decisions or argue over accountability, which can quickly spiral into conversations of blame.
Imagine, instead, a world in which the enterprise can make security a trackable metric. Being able to track an end goal – such as security, whether physical or digital – makes it easier to
Hold people accountable.
Make clear decisions.
Take appropriate action.
A trackable metric is only as good as the data that can back it up. Converged security centers around the concept of a global security data strategy that provides an open architecture for analyses that answer different questions while using a commonly accessed, unified data set that diverse security professionals accept as complete, valid, and the closest thing they can get to the “source of truth”.
Weaving together data for converged security with DataBee®
DataBee by Comcast Technology Solutions fuses together physical and digital security data into a data fabric architecture, then enriches it with additional business information, including:
Business policy context
Organizational hierarchy
Employment status
Authentication and endpoint activity logs
Physical badge and entrance logs
By weaving this data together, organizations achieve insights using a transformed dataset mapped to the DataBee-extended Open Cybersecurity Schema Framework (OCSF). DataBee EntityViews™ uses patent-pending entity resolution software that automatically unifies disparate entity pieces across multiple sources of information. This enables many analytical use cases at speed and low cost. One poignant use case is insider threat monitoring with a comprehensive timeline of user and device activity, inside a building and when connected to networks.
The DataBee security data fabric architecture solves the Sarah problem by weaving together on one timeline:
Her HID badge record from that Saturday’s office visit
The past several months of HR records from Workday showing her termination status.
Her Microsoft user sign-in to a workstation in the office
The HP print logs associated with her network ID and a time stamp.
DataBee empowers all security data users within the organization, including compliance, security, operations, and management. By creating a reliable, accurate dataset, people have fast, data-driven insights to answer questions quickly.
Vulnerabilities and misconfigurations: the CMDB's invasive species
“Knowledge is power.” Whether you attribute this to Sir Francis Bacon or Thomas Jefferson, you’ve probably heard it before. In the context of IT and security, knowing your assets, who owns them, and how they’re connected within your environment are fundamental first steps in the battle against adversaries. You can’t place security controls around an asset if you don’t know it exists. You can’t effectively remediate vulnerabilities to an asset without insight into who owns it or how it affects your business.
Maintaining an up-to-date configuration management database (CMDB) is critical to these processes. However, manually maintaining the CMDB is unrealistic and error-prone for the thousands of assets across the modern enterprise; cloud technologies, complex networks, and devices distributed across in-office and remote workforce users all complicate the process. Adding to these challenges, the asset landscape is ever-changing: entities like cloud assets, containers, and virtual machines can be ephemeral and become lost in the noise generated by the organization’s hundreds of security tools. Additionally, most automation fails to link business users to assets, and many asset tools struggle to prioritize assets correlating to security events, meaning that companies can easily lose visibility and the ability to prioritize asset risk.
Most asset management, IT service management (ITSM), and CMDBs focus on collecting data from the organization’s IT infrastructure. They ingest terabytes of data daily, yet this data remains siloed, preventing operations, security, and compliance teams from collaborating effectively.
With a security data fabric, organizations can break down data silos to create trustworthy, more accurate analytics that provide them with contextual and connected security insights.
The ever-expanding CMDB problem
The enterprise IT environment is a complex ecosystem consisting of on-premises and cloud-based technologies. Vulnerabilities and misconfigurations are an invasive species of the technology world.
In nature, a healthy ecosystem requires a delicate balance of plants and organisms that all support one another. An invasive species that disrupts this balance can destroy crops, contaminate food and water, spread disease, or hunt native species. Without controlling the spread of invasive species, the natural ecosystem is at risk of extinction.
Similarly, the rapid adoption of cloud technologies and remote work models expands the organization’s attack surface by introducing difficult-to-manage vulnerabilities and misconfigurations. Traditional CMDBs and their associated tools often fail to provide the necessary insights for mitigating risk, remediating issues, and maintaining compliance with internal controls.
In the average IT environment, the enterprise may combine any of the following tools:
IT Asset Management: identify technology assets, including physical devices and ephemeral assets like virtual machines, containers, or cell phones
ITSM: manage and track IT service delivery activities, like deployments, builds, and updates
Endpoint Management: manage and track patches, operating system (OS) updates, and third-party installed software
Vulnerability scanner: scan networks to identify security risks embedded in software, firmware, and hardware
CMDB: store information about devices and software, including manufacturer, version, and current settings and configurations
Software-as-a-Service (SaaS) configuration management: monitor and document current SaaS settings and configurations
Meanwhile, various people throughout the organization need access to the information that these tools provide, including the following teams:
IT operations
Vulnerability management
Security
Compliance
As the IT environment expands and the organization collects more security data, the delicate balance between existing tools and people who need data becomes disrupted by newly identified vulnerabilities and cloud configuration drift.
Automatically updating the CMDB with enriched data
In nature, limiting an invasive species’ spread typically means implementing protective strategies for the environment that contain and control the non-native plant or organism. Monitoring, rapid response, public education, and detection and control measures are all ways that environmentalists work to protect the ecosystem.
In the IT ecosystem, organizations use similar activities to mitigate risks and threats arising from vulnerabilities and misconfigurations. However, the time-consuming manual tasks are error-prone and not cost-efficient.
Connect data and technologies
A security data fabric ingests data from security and IT tools, automating and normalizing the inputs so that the organization can gain correlated insights from across a typically disconnected infrastructure. With a vendor-agnostic security data platform connecting data across the environment, organizations can break down silos created by various schemas and improve data integrity.
Improve data quality and reduce storage costs
By applying extract, transform, and load (ETL) pipelines to the data, the security data fabric enables organizations to store and load raw and optimized data. Flattening the data can reduce storage costs since companies can land it in their chosen data repository, like a data lake or warehouse. Further, the data transformation process identifies and can fix issues that lead to inaccurate analytics (a brief sketch follows this list), like:
Data errors
Anomalies
Inconsistencies
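As a rough illustration of that transformation step, the sketch below normalizes timestamps, reconciles inconsistent values, and drops duplicates before the data lands in a repository. It is not DataBee’s pipeline, and the field names and formats are assumptions.

```python
# A brief, illustrative ETL-style cleanup pass (not a product pipeline):
# normalize timestamps, fix inconsistent values, and drop exact duplicates
# before the data lands in the lake or warehouse. Field names are assumptions.
from datetime import datetime, timezone

raw_records = [
    {"host": "WKSTN-17", "event": "logon", "time": "2024-06-08 19:10:00"},
    {"host": "wkstn-17", "event": "logon", "time": "2024-06-08T19:10:00Z"},  # duplicate, different formatting
    {"host": "wkstn-99", "event": "logon", "time": "not-a-timestamp"},       # data error
]

def parse_time(value):
    """Accept a couple of common timestamp formats; return None on errors."""
    for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%dT%H:%M:%SZ"):
        try:
            return datetime.strptime(value, fmt).replace(tzinfo=timezone.utc)
        except ValueError:
            continue
    return None

cleaned, seen = [], set()
for r in raw_records:
    ts = parse_time(r["time"])
    if ts is None:
        continue                      # a real pipeline would quarantine or flag this record
    record = (r["host"].lower(), r["event"], ts.isoformat())
    if record in seen:
        continue                      # drop duplicates after normalization
    seen.add(record)
    cleaned.append(dict(zip(("host", "event", "time"), record)))

print(cleaned)  # one normalized record for wkstn-17
```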
Enrich CMDB with business information
Connecting asset data to real-world users and devices enables organizations to assign responsibility for configuration management. Organizations need to correlate their CMDB data with asset owners so that they can assign security issue remediation activities to the right people. By correlating business information, like organizational hierarchy data, with device, vulnerability scan, and ITSM data, organizations can streamline remediation processes and improve metrics.
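The sketch below illustrates what this correlation can look like in principle: joining hypothetical asset, vulnerability, and HR records so each finding is routed to a named owner and an escalation contact. The data model and field names are assumptions for the example, not DataBee’s schema.

```python
# Illustrative sketch of the enrichment step described above: joining
# asset, vulnerability, and HR/organizational records so a finding can be
# routed to a named owner and their manager. All data and field names are
# hypothetical.
assets = [{"asset_id": "srv-001", "hostname": "pay-api-01", "owner_id": "e1001"}]
people = {"e1001": {"name": "Dana", "manager_id": "e2002", "team": "Payments Engineering"}}
managers = {"e2002": {"name": "Lee"}}
vulns = [{"asset_id": "srv-001", "cve": "CVE-EXAMPLE-0001", "severity": "critical"}]  # placeholder ID

def remediation_tickets():
    """Build owner-assigned remediation tickets from correlated records."""
    tickets = []
    asset_index = {a["asset_id"]: a for a in assets}
    for v in vulns:
        asset = asset_index.get(v["asset_id"])
        owner = people.get(asset["owner_id"], {}) if asset else {}
        tickets.append({
            "cve": v["cve"],
            "severity": v["severity"],
            "hostname": asset["hostname"] if asset else "unknown",
            "assigned_to": owner.get("name", "unassigned"),
            "escalate_to": managers.get(owner.get("manager_id"), {}).get("name"),
            "team": owner.get("team"),
        })
    return tickets

print(remediation_tickets())
```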
Gain reliable insights with accurate analytics
Configuration management is a critical part of an organization’s compliance posture. Most security and data protection laws and frameworks incorporate configuration change management and security patch updating. With clean data, organizations can build analytics models to help improve their compliance outcomes. To enhance corporate governance, organizations use their business intelligence tools, like Power BI or Tableau, to create visualizations so that senior leadership teams and directors can make data-driven decisions.
Maintain Your CMDB’s delicate ecosystem with DataBee®
DataBee from Comcast Technology Solutions is a security data fabric that ingests data from traditional sources and feeds, then supplements it with business logic and people information. The security, risk, and compliance platform engages early and often throughout the data pipeline, leveraging metadata to adaptively collect, parse, correlate, and transform security data to align it with the vendor-agnostic DataBee-extended Open Cybersecurity Schema Framework (OCSF).
Using Comcast’s patent-pending entity resolution technology, DataBee suggests potential asset owners by connecting asset data to real-world users or devices so organizations can assign security issue remediation actions to the right people. With a 360-degree view of assets and devices, vulnerability and remediation management teams can identify critical and low-priority entities to help improve mean time to detect (MTTD) and mean time to respond (MTTR) metrics. The User and Device tables supplement the organization’s existing CMDB and other tools, so everyone who needs answers has them right at their fingertips.
Continuous controls monitoring (CCM): Your secret weapon to navigating DORA
Financial institutions are a critical backbone of local, regional, and global economies. As such, the financial services industry is highly regulated and often faces new compliance mandates and requirements. Threat actors target the industry because it manages and processes valuable customer personally identifiable information (PII) such as account, transaction, and behavioural data.
Maintaining consistent operations is critical, especially in an interconnected, global economy. To standardise processes for achieving operational resilience, the European Parliament passed the Digital Operational Resilience Act (DORA).
What is DORA?
DORA is a regulation passed by the European Parliament in December of 2022. DORA applies to digital operational resilience for the financial sector. DORA entered into force in January of 2023, and it applies as of January 17, 2025.
Two sets of rules, or policy products, provide the regulatory and implementation details of DORA. The first set of rules under DORA was published on January 17, 2024, and consists of four Regulatory Technical Standards (RTS) and one Implementing Technical Standard (ITS). It is worth noting that not all the RTSes contain controls that financial entities need to implement. For example, JC 2023 83, the “Final Report on draft RTS on classification of major incidents and significant cyber threats,” provides criteria for entities to determine if a cybersecurity incident would be classified as a “major” incident according to DORA. The public consultation on the second batch of policy products is completed, and the feedback is being reviewed prior to publishing the final versions of the policies. Based on the feedback received from the public, the finalised documents will be submitted to the European Commission on July 17, 2024.
What is Continuous Controls Monitoring (CCM), and how can it help?
DORA has a wide-ranging set of articles, many of which require the implementation and monitoring of controls. Organisations can use a continuous controls monitoring (CCM) solution, which is an emerging governance, risk and compliance technology, to automate controls monitoring and reduce audit cost and stress. When choosing a CCM solution for DORA, consider a data fabric platform that brings together data from enterprise IT and cybersecurity tools and enriches it with business data to help organisations apply data analytics for measuring and reporting on the effectiveness of internal controls and conformance to laws and regulations. The following are examples of how CCM could be used to support DORA compliance.
Continuous Monitoring:
Article 9 of DORA, Protection and prevention, explains that to adequately protect Information and Communication Technologies (ICT) systems and organise response measures, “financial entities shall continuously monitor and control the security and functioning of ICT systems.” Similarly, Article 16, Simplified ICT risk management framework, requires entities to “continuously monitor the security and functioning of all ICT systems.”
Additionally, Article 6 requires financial entities to “minimise the impact of ICT risk by deploying appropriate strategies, policies, procedures, ICT protocols and tools.” It goes on to require setting clear objectives for information security that include Key Performance Indicators (KPIs), and to implement preventative and detective controls. Reporting on the implementation of multiple controls, combining compliance data with organizational hierarchy, and reporting on KPIs are all tasks that CCM excels at. When choosing a CCM solution for DORA, consider one that supports uninterrupted oversight of multiple controls by automating the ingestion of data, formatting it, and then presenting it to users through the business intelligence solution of their choice.
The Articles of JC 2023 86, the “Final report on draft RTS on ICT Risk Management Framework and on simplified ICT Risk Management Framework,” contain many ICT cybersecurity requirements that are a natural fit to be measured by CCM. Here are some examples of these controls:
Asset management: entities must keep records of a set of attributes for their assets, such as a unique identifier, the owner, business functions or services supported by the ICT asset, whether the asset is or might be exposed to external networks, including the internet, etc.
Cryptographic key management: entities need to keep a register of digital certificates and the devices that store them and must ensure that certificates are renewed prior to their expiration.
Data and system security: entities must select secure configuration baselines for their ICT assets as well as regularly verifying that the baselines are in place.
A CCM solution built on a platform that correlates technical and business data supports security, risk, and compliance teams in building accurate, reliable reports to help measure compliance. It provides consistent visibility into control status across multiple teams throughout the organisation. This reduces the need for reporting controls in spreadsheets and in multiple dashboards, helping business leaders make more immediate and data-driven governance decisions about their business.
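As a simplified illustration, the sketch below implements a single control check loosely modeled on the certificate-renewal requirement above and rolls the result up into a compliance percentage that a dashboard could display. The data, field names, and 30-day renewal window are assumptions for the example.

```python
# A minimal sketch of one continuous control check, loosely modeled on the
# certificate-renewal requirement above: flag certificates expiring within
# the renewal window and report a compliance percentage. Data, field names,
# and the 30-day window are illustrative assumptions.
from datetime import date, timedelta

RENEWAL_WINDOW_DAYS = 30
today = date(2025, 1, 17)  # report date used for this example

certificates = [
    {"cn": "api.example-bank.eu", "owner": "Payments", "expires": date(2025, 1, 25)},
    {"cn": "portal.example-bank.eu", "owner": "Digital", "expires": date(2025, 6, 30)},
]

def control_report():
    """Return non-compliant certificates and an overall compliance percentage."""
    failing = [
        c for c in certificates
        if (c["expires"] - today) <= timedelta(days=RENEWAL_WINDOW_DAYS)
    ]
    compliance_pct = 100 * (len(certificates) - len(failing)) / len(certificates)
    return {"non_compliant": failing, "compliance_pct": compliance_pct}

print(control_report())  # one failing certificate, 50% compliant
```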
Executive Oversight:
Financial entities are required to have internal governance to ensure the effective management of ICT risk (Article 5, Governance and organisation). CCM solutions that integrate with business intelligence solutions, like Power BI and Tableau, to build executive dashboards and data visualizations can provide an overview of multiple controls through a single display.
Roles and Responsibilities:
DORA Article 5(2) requires management to “set clear roles and responsibilities for all ICT-related functions and establish appropriate governance arrangements to ensure effective and timely communication, cooperation and coordination among those functions.” A CCM solution that combines organisational hierarchy with control compliance data makes roles and responsibilities explicit, which helps improve accountability across risk, management, and operations teams. That is, a manager using CCM does not have to guess which assets or people belonging to their organisation are compliant with corporate policy or regulations. Instead, they can easily view their compliance status.
CCM dashboards and detail views provide the specifics about any non-compliant assets, such as the asset name and details of the controls for which the asset is non-compliant. Similarly, CCM can communicate details about compliance for a manager’s staff, such as whether mandatory training has been completed by its due date or who has failed phishing simulation tests.
Coordination of multiple teams:
As the FS-ISAC DORA Implementation Guidance notes, “DORA introduces increased complexity and requires close cross-team collaboration. Many DORA requirements cut across teams and functions, such as resilience/business continuity, cybersecurity, risk management, third-party and supply chain management, threat and vulnerability management, incident management and reporting, resilience and security testing, scenario exercising, and regulatory compliance. As a result, analysing compliance and checking for gaps is challenging, particularly in large firms.”
CCM helps with cross-team collaboration by providing a common, accurate, and consistent view of compliance data, which can reduce overall compliance costs. GRC teams are no longer tasked with creating and distributing multiple reports for various teams while trying to keep those reports consistent and timely, and business teams are no longer responsible for pulling their own reports, which avoids inconsistent or inaccurate reporting caused by inexperience with the reporting product, reports run with different parameters or on different dates, or other discrepancies and errors. CCM resolves this by making the same content, built from consistent source data from the same point in time, available to all users.
5 ways DataBee can help you navigate DORA
The requirements for DORA are organised under these five pillars. How does DataBee help enterprises to comply with each of the five?
1. Information and Communication Technologies (ICT) risk management requirements (which include ICT Asset Management, Vulnerability and patch management, etc.)
DataBee’s Continuous Controls Monitoring (CCM) delivers continuous risk scores and actionable risk mitigation, helping financial entities to prioritize remediation for at-risk resources.
2. ICT-related incident reporting
DORA identifies what qualifies as a “major incident” that must be reported to competent authorities. This is interesting compared to cybersecurity incident reporting requirements from the U.S. Securities and Exchange Commission (SEC), which are based on materiality but do not provide details about what is or is not material. DORA includes criteria to determine if the incident is “major.” Some examples are if more than 10% of all clients or more than 100,000 clients use the affected service, or if greater than 10% of the daily average number of transactions are affected. Additionally, if a major incident does need to be reported, DORA includes specific information that financial entities must provide. These include data fields such as the date and time the incident was detected, the number of clients affected, and the duration of the incident. A security data fabric such as DataBee can help to provide many of the measurable data points needed for the incident report.
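As a simplified illustration of that classification logic, the sketch below checks an incident against the example thresholds cited above. The actual RTS criteria are broader than this, so treat it as a sketch of the idea rather than a compliance tool.

```python
# Illustrative logic only, based on the example thresholds cited above; the
# full classification criteria in the RTS on major incidents include more
# factors than this sketch covers.
def looks_like_major_incident(clients_affected, total_clients,
                              transactions_affected, avg_daily_transactions):
    client_share = clients_affected / total_clients
    transaction_share = transactions_affected / avg_daily_transactions
    return (
        client_share > 0.10
        or clients_affected > 100_000
        or transaction_share > 0.10
    )

# An outage touching 120,000 of 2,000,000 clients trips the absolute-client
# threshold even though it is only 6% of the client base.
print(looks_like_major_incident(120_000, 2_000_000, 40_000, 1_000_000))  # True
```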
3. ICT third-party risk
DataBee for CCM provides dashboards to report on the controls used for the management and oversight of third-party service providers. These controls are implemented to manage and mitigate risk due to the use of third parties.
4. Digital operational resilience testing (Examples include vulnerability assessments, open-source analyses, network security assessments, physical security reviews, source code reviews where feasible, end-to-end testing, or penetration testing.)
DORA emphasizes digital operational resilience testing. DataBee supports this by aggregating and simplifying the reporting for control testing and validation. DataBee’s CCM dashboards provide reporting for multiple controls using an interface that is easily understood, and which business managers can use to readily assess their unit’s compliance with controls required by DORA.
5. Information sharing
As with incident reporting, the data fabric implemented by DataBee supports information sharing. DataBee can economically store logs and other contextual data for an extended period. DataBee makes this data searchable, providing the ability to locate and, at the organisation’s discretion, exchange cyber threat information and intelligence with other financial entities.
Channels are changing: Cloud-based origination and FAST
Linear television isn’t dead, but it certainly isn’t the same as it was even five years ago. As content delivery and consumption shift toward digital platforms, content owners and distributors find themselves navigating a rapidly evolving landscape. The rise of streaming services has further intensified competition for viewership, challenging traditional linear TV models to adapt or risk becoming obsolete.
In response, content owners and operators are rethinking their distribution strategies and exploring innovative approaches to engage audiences and adapt to end-user preferences. However, thriving in this new ecosystem demands more than just reactivity — it requires innovation, agility, and a keen understanding of emerging technologies and industry trends. Read on to learn the role that Managed Channel Origination (MCO) plays in the evolution of content delivery or watch experts Gregg Brown, Greg Forget, David Travis, and Jon Cohen dive deeper in the full webinar here.
Global reach in the digital media landscape
MCO, which leverages cloud-based technologies and advanced workflows to streamline and scale content delivery, is at the forefront of the content delivery revolution. With cloud infrastructure, service providers can seamlessly scale up or down based on demand, optimizing resource allocation and adapting to evolving consumer preferences and distribution platforms.
Thanks to the cloud, barriers and complexities that once hindered global distribution are either significantly reduced — or no longer exist at all — allowing businesses to operate at a global scale and deliver content across multiple regions and availability zones. This newfound flexibility enables providers to deploy channels easily in any region, eliminating the need for physical infrastructure and simplifying the distribution process.
Beyond flexibility, cloud solutions can also lower the total cost of ownership, accelerate deployment speed and time to market, and lower security and support costs. When this is considered with the greater levels of flexibility and scalability offered by cloud-based origination, it’s clear that with the right MCO solution, content providers can tap into emerging markets, capitalize on growing viewership trends, and drive revenue growth on a global scale.
Unlocking the value of live content
Live content, such as sporting events and concerts, remains a cornerstone of linear television, engaging audiences with unique viewing experiences and a sense of immediacy that other forms of media just can’t quite replicate. Yet, operators must strike a balance between the expense of acquiring live content, effectively monetizing it through linear channels, and delivering it to global markets. Cloud-based origination makes it easier to achieve this crucial balance and highlights the importance of collaboration and strategic partnerships in scalable, cost-effective delivery.
Within the context of MCO, live content translates to greater opportunities for regionalization and niche content catering to diverse viewer preferences. As the industry continues to evolve, leveraging technological advancements and embracing innovation will be essential for content owners to navigate the complexities of global distribution, advertising, and monetization while delivering compelling content experiences to audiences around the world.
FAST forward: Transforming the perception of ad-supported channels
FAST channels represent another way that linear is undergoing a transformation. These free, ad-supported channels offer content to viewers over the internet and cater to the growing demand for alternative viewing options. Yet content owners must carefully consider the role of FAST channels within their overall offerings.
The current state of the FAST market feels like halftime in a major sporting event, prompting stakeholders to reassess their strategies and make necessary adjustments to adapt to shifting viewer preferences and navigate the complexities of content distribution. By leveraging advanced technologies, cloud-based channel origination can act as a catalyst for the creation, curation, and monetization of FAST channels.
The simplicity of onboarding onto FAST platforms with pre-enabled implementations has made entry into the market more accessible for content owners. This accessibility, coupled with the growing consumer demand for high-quality content, underscores the importance of FAST channels meeting broadcast-quality standards to remain competitive and reliable.
There's a noticeable transition in the FAST channels landscape, where channels previously seen as lower value are now transforming to increase their sophistication and content quality. This shift reflects a growing demand for higher-quality FAST channels, signaling a departure from the perception of FAST channels as merely supporting acts to traditional linear TV.
With many content providers expanding their reach beyond regionalized playouts to cater to international markets, a cloud-based approach can help navigate the opportunities and challenges associated with expanding FAST channels into emerging markets, particularly concerning advertising revenue and economic viability in new territories.
Advertising within cloud-based origination is still evolving. Currently, distributors wield substantial data and control, managing ad insertion, networks, and sales. At the same time, FAST channels represent the next evolution of monetization, particularly with technologies like server-side ad insertion (SSAI) and AI making it easier to leverage data to create more personalized experiences. As the flow of open data increases, and the use of AI and ML expands, it will only serve to foster greater efficiency and innovation and ensure that ads reach the right global users at the right time.
Crafting a customized channel origination strategy
While channel origination offers content owners an innovative and cost-effective way to launch and manage linear channels more efficiently, it can’t be approached as a one-size-fits-all solution. For instance, does the business want to add a new channel or expand distribution? Is the content file-based, or does it include live events? If it's ad-supported, are there graphic requirements, or is live data needed?
To address these variabilities, content owners must create a scalable strategy that is flexible enough to adapt to regional requirements. By leveraging cloud-based solutions, content owners can effectively manage fluctuations in demand and adapt their channel offerings to meet the needs of global audiences.
In a marketplace where entertainment options abound, cloud-based origination helps businesses make the most impact, not just by increasing the number of channels, but also by ensuring that they have the right offerings in the right places. By adopting a holistic approach to channel origination that incorporates internet-based delivery services, content owners can maximize their reach, enhance viewer engagement, and position themselves for success in an ever-changing landscape.
Learn how Comcast Technology Solutions’ Managed Channel Origination can help your business expand your offerings or break into new markets in Europe, the Middle East, Africa, and beyond.