April 26, 2026
Self-Learning Loyalty: Adaptive AI Architecture, Causal Incrementality, and the Data Boundary Architect Role in Enterprise Customer Intelligence Systems
A working paper introducing Self-Learning Loyalty as a framework for AI-enabled customer intelligence systems — and the Data Boundary Architect, a new practitioner role required to govern them at enterprise scale.
By Jim Edgett
Working Paper · April 2026
Status: Working paper. Not yet peer-reviewed. Posted to SSRN for priority establishment.
AI disclosure: AI-assisted tools were used for literature research support and editorial refinement. All conceptual frameworks, analytical conclusions, and intellectual contributions are the original work of the author.
Abstract
The loyalty industry stands at a technological and strategic inflection point. For three decades, loyalty programs have evolved from simple points-accumulation systems to sophisticated programmatic rule engines to trained machine learning models — each generation more capable than the last, yet each sharing a fundamental limitation: the system does what a human designed it to do. This paper introduces Self-Learning Loyalty as a formal framework for the next generation of loyalty intelligence: AI-enabled systems that observe customer behavioral events in real time, continuously update their predictive models without human retraining cycles, and generate measurable incremental outcomes at the individual level.[1][2] The paper defines the architectural components of a Self-Learning Loyalty system — including the event streaming backbone, unified identity graph, portfolio-level decisioning engine, activation and orchestration layer, and closed-loop measurement system — and distinguishes this architecture from prior generations of loyalty technology.[3][4] It then identifies the most consequential unsolved problem in deploying self-learning systems at enterprise scale: the cold start problem, wherein a system with no prior behavioral history lacks a principled basis for its initial predictions. A solution is proposed in the form of expert-initialized model weights derived from structured loyalty domain expertise, alongside a new practitioner role required to govern this transition: the Data Boundary Architect. This role represents a reinvention of the loyalty strategist's function, from program designer to governance architect and from tier economist to signal boundary specialist. Finally, the paper argues that Self-Learning Loyalty changes not only program execution but also the financial use of loyalty data, because causal incrementality estimates can improve revenue forecasting, funded-offer pricing, and forward-looking business planning.[1][2][5]
Keywords: self-learning systems, loyalty program design, AI governance, data boundary architecture, customer intelligence, causal inference, incrementality, predictable revenue, agentic AI
1. Introduction
The economic case for customer loyalty has been established with unusual clarity relative to most areas of marketing strategy. Research consistently finds that retained and better-recognized customers generate materially superior financial outcomes for the firms that serve them well.[6][1] In practice, however, many loyalty programs remain structurally disconnected from the financial outcomes they are theoretically designed to produce, because they report engagement activity more readily than they demonstrate causal business impact.[1][2]
This disconnect is not primarily a data scarcity problem. Enterprise loyalty programs typically hold large behavioral datasets including transaction histories, offer redemption records, channel interaction logs, and tier migration events. The more fundamental problem is architectural and methodological: the systems that manage these programs were designed to execute loyalty programs defined by humans, not to learn which interventions produce superior outcomes and then continuously improve on that basis.[3][7]
This paper argues that the transition from designed programs to self-learning systems represents the most significant structural change in the loyalty industry since the rise of database-driven CRM. It is not merely an improvement in personalization capability. It is a categorical shift in what a loyalty system is for — from executing what a strategist designed to learning what a customer is likely to do next, what intervention is likely to change that outcome, and what financial value that intervention is expected to create.[2][3]
The paper makes three contributions. First, it introduces Self-Learning Loyalty as a formal conceptual framework and distinguishes it from prior generations of loyalty intelligence through a structured generational taxonomy. Second, it identifies the governance problem that sits at the center of self-learning deployment and defines the Data Boundary Architect as the practitioner role required to solve it. Third, it extends the loyalty value conversation from personalization and retention into causal incrementality and forecastability, arguing that self-learning systems can make loyalty data more useful to Finance, category management, and executive planning than current program architectures allow.[1][2][5]
The sections that follow first distinguish three generations of loyalty intelligence, then define the architectural pattern of Self-Learning Loyalty, then address the cold start problem and the Data Boundary Framework, and finally consider the implications for consulting firms, agencies, in-house teams, and financial planning.
2. Three Generations of Loyalty Intelligence
Before defining what Self-Learning Loyalty is, it is necessary to be precise about what it is not. The loyalty industry frequently uses the term "AI-enabled" to describe capabilities ranging from simple segmentation models to next-best-action engines. That language obscures a meaningful distinction between three categorically different approaches to customer intelligence.[7][8]
2.1 Generation 1: Programmatic Logic
The first generation of loyalty intelligence is deterministic rule-based logic. A member who reaches 500 points advances to the Gold tier. A member who has not visited in 30 days receives a win-back communication. A member who redeems an offer within 72 hours is classified as high engagement. In each case, the outcome was specified in advance by a human designer. The system executes those specifications reliably and at scale, but it cannot discover a rule that no human thought to write.
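The deterministic character of this generation can be made concrete with a short illustrative sketch; the thresholds and action names below are hypothetical, not drawn from any specific program:

```python
from dataclasses import dataclass

@dataclass
class Member:
    points: int
    days_since_visit: int

def evaluate_rules(m: Member) -> list[str]:
    """Apply hand-written rules. The system executes exactly these
    specifications; it cannot act outside what a designer wrote."""
    actions = []
    if m.points >= 500:
        actions.append("advance_to_gold")
    if m.days_since_visit > 30:
        actions.append("send_winback")
    return actions
```

A behavioral pattern not anticipated by these conditionals simply produces no action, which is the bounded-foresight limitation described above.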
Programmatic systems are auditable, predictable, and operationally straightforward. They remain the dominant architecture in many loyalty programs operating today, including programs that describe themselves as AI-enabled. Their limitation is fundamental: the system's intelligence is bounded by the designer's foresight. When customer behavior shifts in ways the original rules did not anticipate, the program fails to adapt until a human identifies the gap and rewrites the rule.[7][8]
2.2 Generation 2: Trained Machine Learning
The second generation introduces statistical models trained on historical behavioral data. A data scientist defines a prediction target — such as churn probability, purchase propensity, offer responsiveness, or lifetime value — and trains a model on historical records. The model is then deployed to score current members against those predictions, enabling more precise targeting than segment-level rules allow.[8][9]
The principal limitation of Generation 2 systems is temporal. The model's understanding of customer behavior is fixed at training time. A customer whose behavior changes between training cycles is scored against a model that does not yet reflect those changes. Retraining requires a human to recognize that drift may have occurred, schedule a retraining run, validate the update, and redeploy it. The system is adaptive in design but static in operation between interventions.[8][9]
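The temporal limitation can be illustrated with a minimal batch-training sketch; the segment labels and churn records are hypothetical:

```python
def train_churn_model(history):
    """Batch training: estimate a churn rate per behavioral segment.
    The estimates are frozen until a human schedules a retraining run."""
    counts, churned = {}, {}
    for segment, did_churn in history:
        counts[segment] = counts.get(segment, 0) + 1
        churned[segment] = churned.get(segment, 0) + int(did_churn)
    return {s: churned[s] / counts[s] for s in counts}

# Scores reflect the training-time snapshot; a member whose behavior
# shifts afterward is still scored against these frozen rates.
model = train_churn_model([
    ("lapsing", True), ("lapsing", True), ("lapsing", False),
    ("active", False),
])
```

However customer behavior drifts between cycles, `model` never moves until the pipeline is rerun, which is precisely the "adaptive in design but static in operation" property.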
2.3 Generation 3: Self-Learning Systems
The third generation is defined by the elimination of the human retraining cycle as the mechanism for model improvement. A Self-Learning Loyalty system observes each new behavioral event — transaction, offer view, lapse signal, channel interaction, service contact — and continuously updates its model of each customer's likely next action. It does not wait for a scheduled retraining run. It updates at the speed of the event stream.[3][10]
This distinction has several consequences. First, the system can detect behavioral pattern shifts in near real time rather than at the end of a quarterly retraining cycle. Second, it can discover predictive patterns that no human designer specified. Third, its predictions improve continuously as it accumulates more behavioral data, rather than performing at training-time accuracy until the next refresh.[8][10]
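One way to sketch the per-event update is a simple online logistic learner. The feature encoding and learning rate below are illustrative assumptions, not a reference implementation of any production system:

```python
import math

class OnlineChurnModel:
    """Online learner: weights update with every observed event,
    eliminating the scheduled retraining cycle."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def predict(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, y):
        # One stochastic gradient step per event: the model's view
        # of the customer moves at the speed of the event stream.
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
```

Each call to `update` is triggered by an arriving behavioral event, so drift detection is a byproduct of normal operation rather than a separate human-initiated task.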
The most intuitive illustration of this principle at scale is Amazon's logistics model. Amazon does not predict with certainty that a particular customer will order a particular product on a particular day. It predicts that enough customers sharing a behavioral signature will order a given product within a defined time window that pre-positioning inventory is economically justified. The individual prediction is probabilistic. The aggregate prediction is reliable enough to build a logistics system around. The customer experience — same-day delivery that feels inevitable — is the visible result of large-scale statistical learning applied to individual activation.
Self-Learning Loyalty applies the same principle to customer engagement. The system does not need certainty that a specific member will lapse in seven days. It needs to learn that customers sharing a behavioral signature lapse at a rate high enough that an intervention at day five is economically justified. Population-level learning produces individual-level activation and measurable incremental outcomes.[1][2]
The financial proof standard this creates is incrementality: the difference between outcomes for members who received the intervention and a comparable group that did not. Formally, this is a causal inference problem: estimating the treatment effect of a loyalty intervention relative to the counterfactual in which the member did not receive it.[11][12] Methods such as holdout designs, propensity score matching, uplift modeling, and related causal machine learning approaches make that counterfactual estimation more rigorous than descriptive lift analysis alone.[11][12] In a Self-Learning Loyalty architecture, these causal estimates are not retrospective reporting artifacts; they become inputs into the decisioning system itself, allowing the platform to learn which interventions produce genuine incremental value and to refine future decisions accordingly.[1][2]
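In its simplest randomized-holdout form, the incrementality estimate reduces to a difference in mean outcomes. The sketch below uses hypothetical revenue figures:

```python
def incremental_lift(treated_outcomes, holdout_outcomes):
    """Estimate the average treatment effect of an intervention as the
    difference in mean outcome between treated members and a randomized
    holdout. Descriptive lift without the holdout would conflate
    targeting with causation."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated_outcomes) - mean(holdout_outcomes)

# Hypothetical revenue per member over the post-intervention window.
lift = incremental_lift([10.0, 12.0, 14.0], [9.0, 11.0, 10.0])
```

Propensity matching and uplift modeling refine the construction of the comparison group; the estimand remains this counterfactual difference.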
3. The Architecture of Self-Learning Loyalty
Self-Learning Loyalty is not a product. It is an architectural pattern: a set of components that, when assembled, allow a loyalty program to function as a continuously improving customer intelligence system rather than a static execution engine.[3][10] Five layers are essential.
3.1 Layer 1: Event Streaming Backbone
A self-learning system requires a continuous, real-time stream of behavioral signals from every system that touches the customer. In the typical enterprise loyalty environment, these systems include the loyalty platform, the point-of-sale system, e-commerce or digital ordering channels, paid media platforms, service CRM systems, and third-party partner systems. Each system captures behavioral data. In most current architectures, much of that data is not shared across the stack in real time.[3][7]
The event streaming backbone changes this. Architectures such as Apache Kafka, AWS EventBridge, and comparable event-driven infrastructure create a customer event fabric that routes behavioral signals from source systems to the shared intelligence layer without tightly coupling each application to every other one.[3] This layer does more than move data. It establishes the temporal conditions under which self-learning is possible.
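Conceptually, the backbone is a publish-subscribe fabric. The toy in-process bus below stands in for Kafka-class infrastructure purely to illustrate the routing pattern; the topic name is hypothetical:

```python
from collections import defaultdict

class EventBus:
    """Toy stand-in for a streaming backbone: source systems publish
    events to topics, and consumers subscribe without the sources
    knowing who listens — the loose coupling described above."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

# A decisioning consumer sees POS events without being wired to the POS.
bus = EventBus()
received = []
bus.subscribe("pos.transaction", received.append)
bus.publish("pos.transaction", {"member_id": "m1", "amount": 12.50})
```

A production backbone adds durability, ordering, and replay, but the routing contract is the same.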
3.2 Layer 2: Unified Identity Graph
The event streaming backbone delivers signals. The identity layer determines which signals belong to the same customer. In practice, enterprise loyalty environments operate with multiple disconnected identifiers: loyalty IDs, app user IDs, digital ordering accounts, hashed media identifiers, and POS records. When those identifiers are not resolved, the self-learning system cannot construct a coherent behavioral picture of any individual customer.
A unified identity graph creates a shared customer profile accessible across connected systems and updated at the speed of the event stream. Although many enterprises possess partial identity capabilities today, the strategic challenge is not simply technical resolution. It is deciding which systems are connected, which identifiers are resolved, and how conflicts and ambiguities are handled. Those are business and governance decisions as much as engineering ones.[3][7]
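At its core, identity resolution is a graph-connectivity problem. A minimal union-find sketch shows how linked identifiers collapse to one canonical profile; the identifier formats are hypothetical:

```python
class IdentityGraph:
    """Union-find over identifiers: any two linked IDs resolve to the
    same canonical root, giving every event stream a shared customer key."""
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            # Path halving keeps lookups near-constant time.
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def link(self, a, b):
        """Record that two identifiers belong to the same customer."""
        self.parent[self._find(a)] = self._find(b)

    def resolve(self, x):
        return self._find(x)
```

The harder questions noted above — which systems to connect and how to adjudicate conflicting links — sit outside this mechanism, in governance.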
3.3 Layer 3: Portfolio-Level Decisioning Engine
The decisioning layer is where self-learning intelligence operates. It receives the complete behavioral stream, resolved to unified customer identities, and applies predictive models to determine what action — if any — should be taken for each customer at each moment. These models may include churn propensity, offer responsiveness, next-best-action recommendation, suppression and fatigue management, and economic constraint logic that ensures interventions are profitable as well as behaviorally effective.[2][7]
The term "portfolio-level" is deliberate. Many loyalty environments still make decisions at the channel or platform level rather than at the customer portfolio level. The loyalty platform decides on offers, the email platform decides on campaign triggers, and the paid media system decides on audiences independently. Each may be locally rational while collectively producing noise. Portfolio-level decisioning imposes a coordinated decision across channels and systems, with the added advantage that economic constraints and causal uplift logic can be encoded at a single governance point instead of being distributed unevenly across operating teams.[7][5]
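A minimal sketch of portfolio-level arbitration follows, with economic constraints encoded at the single decision point rather than per channel; the uplift, margin, and cost figures are hypothetical:

```python
def decide(candidates, budget, fatigued=False):
    """Pick one action across all channels by expected incremental
    margin, subject to budget, profitability, and fatigue constraints.
    Each candidate: channel, uplift (prob. of incremental visit),
    margin (value of that visit), and cost of the intervention."""
    if fatigued:
        return None  # suppression applies portfolio-wide, not per channel
    eligible = [c for c in candidates
                if c["cost"] <= budget
                and c["uplift"] * c["margin"] > c["cost"]]
    if not eligible:
        return None
    return max(eligible, key=lambda c: c["uplift"] * c["margin"] - c["cost"])

choice = decide(
    [{"channel": "email_offer", "uplift": 0.10, "margin": 40.0, "cost": 1.0},
     {"channel": "paid_media",  "uplift": 0.04, "margin": 40.0, "cost": 2.0}],
    budget=1.5,
)
```

Because every channel's candidate flows through one function, fatigue rules and profitability thresholds cannot drift apart across operating teams.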
3.4 Layer 4: Activation and Orchestration
The activation layer converts decisioning outputs into customer experiences across owned and paid channels. In a self-learning architecture, execution systems receive instructions from the portfolio decisioning layer rather than generating decisions in isolation. This does not reduce the value of the loyalty platform. It specializes it. The loyalty platform continues to manage offer lifecycle execution, member record integrity, and operational delivery. The decisioning engine handles cross-system arbitration and optimization.[3][7]
3.5 Layer 5: Closed-Loop Measurement and Incrementality
The measurement layer routes outcome data — offer redemption, visit frequency change, basket size delta, churn events, service resolution, and media response — back into the decisioning engine, offer design logic, and program economics. This feedback loop is what makes the system self-improving: each observed outcome sharpens the next prediction.[1][2]
The measurement layer also produces the causal proof required by Finance. By comparing treatment and holdout outcomes or applying defensible causal estimation methods when experimentation is constrained, the organization can estimate incremental revenue, margin, and visit effects attributable to the loyalty system's actions rather than merely correlated with them.[11][12] Engagement rates measure activity. Incrementality measures causation. In a Self-Learning Loyalty system, this causal layer is not an analytical afterthought but a native part of the operating architecture.[1][2]
4. The Cold Start Problem and the Expert-Initialized Weight Set
The architectural description above presents Self-Learning Loyalty as compelling and coherent. It is. It is also incomplete as a deployment framework unless it addresses the cold start problem.
A self-learning system requires data in order to learn. Before it has observed a meaningful volume of behavioral events — before it has seen enough members lapse and not lapse, redeem and not redeem, expand basket and not expand basket — its predictions are unreliable. In a new deployment, this cold start window may extend for months before model behavior reaches operational reliability.
The cold start problem is not solely about data volume. It is also about initialization. A machine learning system does not begin as a blank slate. It begins with an architecture, a feature set, and an initial distribution of weights or assumptions about how signals relate to outcomes. Those initial conditions affect how quickly the model learns and which patterns it discovers first.
In current enterprise deployments, initialization is often controlled primarily by those who manage the data pipeline and model training process. Those practitioners may possess deep technical expertise but limited understanding of which behavioral signals are most predictive of loyalty outcomes in a specific business context. The result can be a technically competent but programmatically naive model — one that misses distinctions loyalty strategists recognize immediately, such as the difference between a strategic redeemer and an authentically engaged member, or the distinct economics of threshold behavior near tier advancement.[8][9]
A solution is proposed here in the form of expert-initialized model weights: a structured process in which loyalty domain expertise is translated into the initial signal weighting, exclusion logic, and economic constraints governing a self-learning model's first predictions. This initial configuration is not a permanent answer. It is a hypothesis informed by the strongest available expertise at deployment time. The self-learning system then tests that hypothesis against actual behavior, confirms what holds, adjusts what does not, and discovers patterns no human anticipated.
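A minimal sketch of expert initialization: strategist-elicited priors seed the model's starting weights, which online learning then revises. The signal names and weight values are illustrative assumptions, not recommended settings:

```python
EXPERT_PRIORS = {
    # Hypothetical signal weights elicited from loyalty strategists,
    # expressed as log-odds contributions to lapse risk.
    "days_since_visit_norm": 1.2,
    "redemption_rate": -0.8,
    "near_tier_threshold": -0.5,  # threshold behavior suppresses lapse
}

def init_weights(feature_order, priors, default=0.0):
    """Seed the model's first predictions with domain expertise instead
    of zeros; signals with no prior start neutral. Subsequent online
    updates treat these values as a hypothesis to confirm or revise."""
    return [priors.get(f, default) for f in feature_order]
```

The cold start window shrinks because the system's day-one behavior already encodes distinctions — such as the strategic redeemer versus the engaged member — that an uninitialized model would need months of data to discover.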
This handoff — from human expertise to machine learning to self-improving system — is the practical design challenge not yet fully addressed in public loyalty architecture literature. It is the gap that gives rise to the practitioner role defined in the next section.
5. The Data Boundary Architect: A New Practitioner Role
The expert-initialized weight set requires a practitioner capable of producing it. That practitioner must answer four questions for every Self-Learning Loyalty deployment. These are strategic and governance questions, not merely technical ones.
5.1 What Signals Does the System Observe?
The signal set defines what the self-learning system is allowed to learn from. Transaction frequency, offer redemption rate, channel interaction patterns, lapse duration, basket composition, digital ordering behavior, service contact history, and media exposure may all add predictive value. The choice of which signals to include is not simply an engineering decision. It is a loyalty strategy decision informed by prior knowledge of which behaviors historically predict incremental outcomes, and by governance judgment about which signals are appropriate to use in the client's relationship context.
5.2 What Signals Are Excluded — and Why?
Exclusions are as strategically important as inclusions. Some signals are legally constrained. Some are technically available but ethically inappropriate. Some introduce confounding effects that reduce explanatory power or bias outputs in ways the operator would not endorse. The exclusion rationale must be explicit and documented. Signal inclusion and exclusion are not only governance decisions; they also define the covariate space within which causal estimates are computed, and therefore affect the validity of uplift estimates and ROI calculations.[11][12]
5.3 How Is Customer Consent Encoded into the Learning Architecture?
A customer who enrolls in a loyalty program has not necessarily consented to every possible use of their behavior. Service complaint history, media exposure, delivery-platform interactions, and coalition-partner behavior may all be technically accessible and predictively useful. Whether they fall within the scope of consent is a legal and strategic question. Consent architecture defines which signals require which consent state, how consent is stored, how the system enforces those boundaries in real time, and how consent changes propagate through active learning.[4][13]
5.4 How Are Model Decisions Made Auditable?
A self-learning system's decisions must be explainable to multiple audiences. Marketing teams need to understand why one intervention was recommended over another. Finance needs to understand why budget was allocated to a segment. Legal and privacy teams must demonstrate that the system operates within its stated purposes and boundaries. In a continuously learning system, auditability cannot be retrofitted after deployment. It must be designed into the architecture at the outset.[4][13]
5.5 The Data Boundary Framework
These four questions — signal inclusion, signal exclusion, consent architecture, and audit design — constitute the Data Boundary Framework: the strategic governance document that specifies what a self-learning loyalty system is allowed to learn. This document becomes the foundation on which the technical team builds, the artifact that legal and privacy teams review, and the governance instrument that makes the system trustworthy enough to deploy at enterprise scale.[4][13]
The practitioner who produces this document is the Data Boundary Architect. The role name is new, but the underlying need is not. The Data Boundary Architect is a loyalty strategist whose primary deliverable has shifted from program design to governance architecture. The underlying expertise remains loyalty expertise: understanding which signals predict incremental behavior, how program economics operate, what motivates customers across segments, and how loyalty systems interact with the broader enterprise stack. What changes is the object of design. Instead of designing a fixed tier structure or benefit architecture, the practitioner designs the boundary conditions under which the self-learning system is permitted to operate.
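The four boundary decisions can be expressed as a single enforceable runtime filter. The sketch below is conceptual; the signal names, consent keys, and in-memory audit log are hypothetical stand-ins for production components:

```python
BOUNDARY = {
    # Signal inclusion: what the system is allowed to learn from.
    "included": {"transaction_frequency", "redemption_rate", "lapse_duration"},
    # Signal exclusion: available but out of scope, with documented rationale.
    "excluded": {"service_complaint_text"},
    # Consent architecture: signals admissible only under a consent state.
    "consent_required": {"media_exposure": "media_opt_in"},
}

audit_log = []  # audit design: every admit/reject decision is recorded

def admit_signals(signals, consents):
    """Filter a raw signal dict down to what the Data Boundary
    Framework permits, logging each decision for auditability."""
    admitted = {}
    for name, value in signals.items():
        required = BOUNDARY["consent_required"].get(name)
        ok = ((name in BOUNDARY["included"] or required is not None)
              and name not in BOUNDARY["excluded"]
              and (required is None or consents.get(required, False)))
        audit_log.append({"signal": name, "admitted": ok})
        if ok:
            admitted[name] = value
    return admitted
```

Because the filter runs before learning, consent changes propagate immediately and the audit trail is produced as a side effect of normal operation rather than reconstructed after the fact.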
6. The Governance Imperative: Evidence from Research and Practice
The case for the Data Boundary Architect is not solely theoretical. Across AI governance and customer intelligence practice, a widening gap has emerged between the capability to deploy intelligent systems and the organizational capability to govern them effectively.[4][13]
Gartner research projects that over 40 percent of agentic AI projects will be cancelled by 2027, with governance failure — not technical failure — as the primary cause.[4] A subsequent Gartner D&A Predictions report estimates that 50 percent of AI agent deployment failures will be attributable to insufficient runtime enforcement of governance controls.[13]
These findings apply directly to Self-Learning Loyalty. Such systems, by definition, coordinate multiple models and multiple data sources in continuous operation. Their performance advantage depends on that breadth. Their governance risk does too. The same signals that improve predictive power can create regulatory, privacy, and trust liabilities if the system's boundaries are undefined or weakly enforced.
BCG's 2025 Personalization Index finds that the next-best-action engines brands are building must be powered by loyalty and consented data — naming consent architecture as a requirement for personalization at enterprise scale, not merely a compliance consideration.[5] Accenture's Technology Vision 2025 describes AI systems creating a virtuous learning loop in which increasing system use drives continuous capability improvement — while noting that this dynamic places governance and data boundary design at the center of enterprise AI strategy.[14]
The convergence of governance pressure and architectural capability creates a narrow strategic window. Firms that learn to translate loyalty expertise into formal data boundary design can occupy a differentiated position between technology vendors, generalist AI governance advisors, and traditional loyalty strategy firms. That position is defensible because it depends on domain-specific judgment, not generic policy language or commodity implementation work.[7][14]
7. Implications for Loyalty Strategy Practice
The framework described here has direct implications for loyalty consulting firms, agency partners, in-house teams, and financial planners.
7.1 For Loyalty Consulting Firms
Self-Learning Loyalty requires consulting firms to develop competencies beyond traditional program design. In addition to tier economics and benefit architecture, firms must build capability in signal boundary specification, consent-aware data design, causal incrementality measurement, and cross-system decisioning logic.[7][2]
This does not replace classic loyalty expertise. It extends it. The expert-initialized weight set depends on the same behavioral knowledge that historically informed tier design and offer architecture. The difference is that the output is no longer a static program design document. It is a governance and learning architecture that shapes what the system can learn, how it acts, and how its economic value is proven over time.
7.2 For Agency Partners
Agency networks increasingly depend on first-party data, identity resolution, and measurable audience activation. Self-Learning Loyalty strengthens this agenda because the identity and decisioning layers required for loyalty learning are also the layers that make loyalty audiences targetable and measurable in paid media contexts.[7][5]
This creates a bridge between loyalty strategy and media monetization. Once identity and behavioral learning are robust, loyalty data can support funded offers, audience packaging, and closed-loop measurement that are more economically valuable than generic segmentation. Agency partners that connect these capabilities early will sit at the intersection of loyalty, media, and AI governance rather than within one silo alone.
7.3 For In-House Loyalty Teams
For brand operators, Self-Learning Loyalty presents both opportunity and structural challenge. The opportunity is that adaptive systems can outperform fixed-rule systems over time as behavioral data accumulates and interventions are refined.[8][10] The challenge is that these systems require cross-functional governance: marketing, data and analytics, technology, legal, privacy, and Finance must coordinate around a shared operating document rather than independent workflows.
Operationally, self-learning systems also change what the business receives from loyalty data. Instead of static dashboards, internal stakeholders can receive continuously updated estimates of expected incremental visits, revenue, and margin by segment and intervention, along with scenario projections that support budget allocation and program prioritization.[1][2] In that sense, the output of a self-learning loyalty system is not merely a better campaign. It is a better commercial planning input.
7.4 Predictable Revenue and Forward-Looking Use of Loyalty Data
Self-Learning Loyalty systems do more than optimize current campaigns. They improve the predictability of future revenue streams by continuously estimating the expected causal uplift associated with different interventions, segments, and investment levels.[1][2][5] Rather than treating loyalty as a cost center with opaque impact, organizations can begin to express loyalty performance as expected incremental revenue under defined scenarios.
This has implications beyond marketing execution. For public companies, more rigorous incrementality estimates can support internal forecasting and improve confidence in the assumptions used to explain future performance to executive teams, boards, and investors.[2][5] For merchants, coalition partners, and category managers, the same estimates can improve the pricing of funded offers and partner programs by grounding those negotiations in measured expected lift rather than broad averages or anecdotal claims.[1][5]
The practical value of loyalty data therefore expands. Loyalty data no longer serves only to trigger rewards or personalize communications. In a self-learning architecture, it becomes an input to forward-looking commercial models: expected revenue contribution, expected margin lift, scenario planning, and capital allocation discussions. The stronger the causal measurement discipline, the more credibly loyalty can participate in those decisions.[1][2][5]
8. Conclusion
The loyalty industry has demonstrated across multiple technology generations that retained and better-engaged customers produce superior economic outcomes. What it has not yet fully demonstrated is the ability to convert the behavioral data generated by loyalty programs into continuously improving, causally validated, financially useful intelligence.[6][1]
Generation 1 and Generation 2 systems execute programs that humans designed. Self-Learning Loyalty systems learn what programs should become, continuously and at the speed of the event stream, without waiting for a human to recognize model drift and schedule retraining. The technical components required to do this — event streaming, unified identity, portfolio-level decisioning, activation orchestration, and closed-loop measurement — are increasingly available.[3][10]
What remains underdeveloped is the governance methodology that makes such systems trustworthy and deployable at enterprise scale. That methodology is the Data Boundary Framework. The practitioner who produces it is the Data Boundary Architect. Together, they provide the missing link between loyalty strategy expertise and self-learning system design.[4][13]
By embedding causal incrementality measurement into a self-learning architecture, loyalty programs become more than engagement mechanisms. They become systems for estimating expected commercial impact with greater rigor and frequency, making loyalty data more useful to Finance, merchant teams, and executive planners than traditional program architectures allow.[1][2][5] The firms and practitioners that define this methodology early will be positioned not only to improve loyalty performance, but to shape how loyalty intelligence is governed, priced, and valued in the next phase of enterprise customer strategy.
References
- Reichheld, F. F. (2001). Loyalty Rules! How Today’s Leaders Build Lasting Relationships. Harvard Business School Press.
- McKinsey & Company. (2021). The value of getting personalization right — or wrong — is multiplying: Next in Personalization 2021. McKinsey & Company.
- McKinsey & Company. (2024). Members Only: The growing value of personalized loyalty programs. McKinsey & Company.
- Gartner. (2025). Gartner Peer Insights: AI Governance in Agentic Deployments. Gartner Research.
- BCG. (2025). BCG Personalization Index 2025: Turning Data into Personalized Experiences. Boston Consulting Group.
- Reichheld, F. F., & Teal, T. (1996). The Loyalty Effect: The Hidden Force Behind Growth, Profits, and Lasting Value. Harvard Business School Press.
- Roosta, A., Sadjadi, S. J., & Makui, A. (2025). Omnichannel loyalty optimization using reinforcement and adaptive learning. PLOS ONE.
- Aluri, A., Price, B. S., & McIntyre, N. H. (2018). Using machine learning to reinvent customer experience within hospitality. International Journal of Hospitality Management, 73, 150–168.
- Lin, J. (2025). Consumer behavior prediction in loyalty programs using gradient boosted decision trees. PLOS ONE.
- Roosta, A., Sadjadi, S. J., & Makui, A. (2025). Reinforcement learning and adaptive learning for omnichannel loyalty prediction. PLOS ONE.
- Gutierrez, P., & Gérardy, J. Y. (2017). Causal inference and uplift modeling: A review of the literature. Proceedings of the 3rd International Conference on Predictive Applications and APIs, 67, 1–13.
- Athey, S., & Imbens, G. W. (2016). Recursive partitioning for heterogeneous causal effects. Proceedings of the National Academy of Sciences, 113(27), 7353–7360.
- Gartner. (2026). D&A Predictions 2026: AI Agent Deployment Governance. Gartner Research.
- Accenture. (2025). Accenture Technology Vision 2025: The Age of Intelligence. Accenture.
Published on SSRN
Edgett, J. (2026). Self-Learning Loyalty: Adaptive AI Architecture, Causal Incrementality, and the Data Boundary Architect Role in Enterprise Customer Intelligence Systems. SSRN Working Paper 6654158.
Jim Edgett
Jim Edgett is the founder of Journey Gain, which builds AI-enabled identity and loyalty systems for QSR and retail operators. He has spent 20+ years at the intersection of loyalty, first-party data, retail media, and CX — including GameStop’s 65M-member loyalty ecosystem, Salesforce/IBM engagements with Dick’s Sporting Goods and TaylorMade, and advisory work with multi-location restaurant and retail brands.