_ _ _ _ _ ___ _ _ ___
| | | (_) | | | | / (_) | | | | |_ |
| |_| |_ __ _| |__ ___ _ __ ______| |/ / _ _ __ __| | ___ __| |______ | |
| _ | |/ _` | '_ \ / _ \ '__|______| \| | '_ \ / _` |/ _ \/ _` |______| | |
| | | | | (_| | | | | __/ | | |\ \ | | | | (_| | __/ (_| | /\__/ /
\_| |_/_|\__, |_| |_|\___|_| \_| \_/_|_| |_|\__,_|\___|\__,_| \____/
__/ |
|___/
Bringing Higher-Kinded Types and Composable Optics to Java
Higher-Kinded-J brings two powerful functional programming toolsets to Java, enabling developers to write more abstract, composable, and robust code:
- A Higher-Kinded Types (HKT) Simulation to abstract over computational contexts like `Optional`, `List`, or `CompletableFuture`.
- A powerful Optics Library to abstract over immutable data structures, with boilerplate-free code generation.
These work together to solve common Java pain points in a functional, type-safe way.
Two Pillars of Functional Programming
1: A Higher-Kinded Types Simulation ⚙️
Java's type system lacks native support for Higher-Kinded Types, making it difficult to write code that abstracts over "container" types. We can't easily define a generic function that works identically for List<A>, Optional<A>, and CompletableFuture<A>.
Higher-Kinded-J simulates HKTs in Java using a technique inspired by defunctionalisation. This unlocks the ability to use common functional abstractions like Functor, Applicative, and Monad generically across different data types.
With HKTs, you can:
- Abstract Over Context: Write logic that works polymorphically over different computational contexts (optionality, asynchrony, error handling, collections).
- Leverage Typeclasses: Consistently apply powerful patterns like `map`, `flatMap`, `sequence`, and `traverse` across diverse data types.
- Build Adaptable Pipelines: Use profunctors to create flexible data transformation pipelines that adapt to different input and output formats.
- Manage Effects: Use provided monads like `IO`, `Either`, `Validated`, and `State` to build robust, composable workflows.
2: A Powerful Optics Library 🔎
Working with immutable data structures, like Java records, is great for safety but leads to verbose "copy-and-update" logic for nested data.
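The "copy-and-update" pain is easy to see with nested records. A small illustrative sketch in plain Java (the record and method names here are hypothetical, not from the library): changing one leaf field forces you to rebuild every enclosing record by hand.

```java
// Illustrative nested records; names are hypothetical.
record Address(String street, String city) {}
record Employee(String name, Address address) {}
record Company(String id, Employee ceo) {}

public class CopyAndUpdate {
    // Changing the CEO's street requires rebuilding every enclosing record.
    static Company withCeoStreet(Company c, String newStreet) {
        return new Company(
            c.id(),
            new Employee(
                c.ceo().name(),
                new Address(newStreet, c.ceo().address().city())));
    }

    public static void main(String[] args) {
        Company before = new Company("acme", new Employee("Ada", new Address("1 Old Rd", "London")));
        Company after = withCeoStreet(before, "2 New Rd");
        System.out.println(after.ceo().address().street()); // prints "2 New Rd"
    }
}
```

Note that the original `before` value is untouched; immutability is preserved, but the update logic grows with every level of nesting. Optics exist to collapse exactly this boilerplate.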
Higher-Kinded-J provides a full-featured Optics library that treats data access as a first-class value. An optic is a composable, functional getter/setter that lets you "zoom in" on a piece of data within a larger structure.
With Optics, you can:
- Eliminate Boilerplate: An annotation processor generates `Lens`, `Prism`, `Iso`, `Fold`, and `Traversal` optics for your records and sealed interfaces automatically.
- Perform Deep Updates Effortlessly: Compose optics to create a path deep into a nested structure and perform immutable updates in a single, readable line.
- Decouple Data and Operations: Model your data cleanly as immutable records, while defining complex, reusable operations separately as optics.
- Perform Effectful Updates: The Optics library is built on top of the HKT simulation, allowing you to perform failable, asynchronous, or stateful updates using the powerful `modifyF` method.
- Adapt to Different Data Types: Every optic is a profunctor, meaning it can be adapted to work with different source and target types using `contramap`, `map`, and `dimap` operations. This provides incredible flexibility for API integration, legacy system support, and data format transformations.
- Query with Precision: Use filtered traversals to declaratively focus on elements matching predicates, and indexed optics to perform position-aware transformations with full index tracking.
- Java-Friendly Syntax: Leverage the fluent API for discoverable, readable optic operations, or use the Free Monad DSL to build composable optic programs with multiple execution strategies (direct, logging, validation).
Getting Started
Note: Before diving in, ensure you have a Java Development Kit (JDK), version 24 or later. The library makes use of features available in this version.
The project is modular. To use it, add the relevant dependencies to your build.gradle or pom.xml. An annotation processor automatically generates the required boilerplate for Optics and other patterns.
For HKTs:
```kotlin
// build.gradle.kts
implementation("io.github.higher-kinded-j:hkj-core:LATEST_VERSION")
```
For Optics:
```kotlin
// build.gradle.kts
implementation("io.github.higher-kinded-j:hkj-core:LATEST_VERSION")
annotationProcessor("io.github.higher-kinded-j:hkj-processor:LATEST_VERSION")
annotationProcessor("io.github.higher-kinded-j:hkj-processor-plugins:LATEST_VERSION")
```
For SNAPSHOTS:
```kotlin
repositories {
    mavenCentral()
    maven {
        url = uri("https://central.sonatype.com/repository/maven-snapshots/")
    }
}
```
Documentation
We recommend following the documentation in order to get a full understanding of the library's capabilities.
Optics Guides
This series provides a practical, step-by-step introduction to solving real-world problems with optics.
- An Introduction to Optics: Learn what optics are and the problems they solve.
- Practical Guide: Lenses: A deep dive into using `Lens` for nested immutable updates.
- Practical Guide: Prisms: Learn how to use `Prism` to safely work with `sealed interface`s (sum types).
- Practical Guide: Isos: Understand how `Iso` provides a bridge between equivalent data types.
- Practical Guide: Traversals: Master the `Traversal` for performing bulk updates on collections.
- Profunctor Optics: Discover how to adapt optics to work with different data types and structures.
- Capstone Example: Deep Validation: A complete example that composes multiple optics to solve a complex problem.
- Practical Guide: Filtered Optics: Learn how to compose predicates with optics for declarative filtering.
- Practical Guide: Indexed Optics: Discover position-aware transformations with index tracking.
- Practical Guide: Limiting Traversals: Master traversals that focus on portions of lists.
- Fluent API for Optics: Explore Java-friendly syntax for optic operations.
- Free Monad DSL: Build composable optic programs as data structures.
- Optic Interpreters: Execute optic programs with different strategies (logging, validation).
HKT Core Concepts
For users who want to understand the underlying HKT simulation that powers the optics library or use monads directly.
- An Introduction to HKTs: Learn what HKTs are and the problems they solve.
- Core Concepts: Understand
Kind, Witness Types, and Type Classes (Functor,Monad). - Supported Types: See which types are simulated and have typeclass instances.
- Usage Guide: Learn the practical steps for using the HKT simulation directly.
- HKT Examples: Practical examples of how to use the monads.
- Order Example Walkthrough: A detailed example of building a robust workflow with monad transformers.
- Extending Higher-Kinded-J: Learn how to add HKT support for your own custom types.
History
Higher-Kinded-J evolved from a simulation originally created for the blog post Higher Kinded Types with Java and Scala, which explored higher-kinded types and their lack of support in Java. The blog post discussed a technique called defunctionalisation that can be used to simulate higher-kinded types in Java. Since then, Higher-Kinded-J has grown into something altogether more useful, supporting many more functional patterns.
Introduction to Higher-Kinded Types

- The analogy between higher-order functions and higher-kinded types
- Why Java's type system limitations necessitate HKT simulation
- How abstractions over "container" types enable more reusable code
- The difference between first-order types, generic types, and higher-kinded types
- Real-world benefits: less boilerplate, more abstraction, better composition
We can think about Higher-Kinded Types (HKT) by making an analogy with Higher-Order Functions (HOF).
higher-kinded types are to types what higher-order functions are to functions.
They both represent a powerful form of abstraction, just at different levels.
The Meaning of "Regular" and "Higher-Order"
Functions model Behaviour
- First-Order (Regular) Function: This kind of function operates on simple values. It takes one or more values, like an `int`, and returns a value.
```java
// Takes a value and returns a value
int square(int num) {
    return num * num;
}
```
- Higher-Order Function: This kind of function operates on other functions. It can take functions as arguments and/or return a new function as the result. It abstracts over behaviour.
```java
// Takes a Set of type A and a function fn that maps A to B;
// returns a new Set of type B
<A, B> Set<B> mapper(Set<A> set, Function<A, B> fn) {
    Set<B> result = new HashSet<>();
    for (A a : set) {
        result.add(fn.apply(a)); // applies fn to each element of the set
    }
    return result;
}
```
`mapper` is a higher-order function because it takes the function `fn` as an argument.
Types model Structure
- First-Order (Regular) Type: A simple, concrete type like `int` or `Set<Double>` represents a specific kind of data.
- Higher-Kinded Type (HKT): This is a "type that operates on types." More accurately, it's a generic type constructor that can itself be treated as a type parameter. It abstracts over structure or computational context.
Let us consider `Set<T>`. `Set` itself, without the `T`, is a type constructor. Think of it as a "function" for types: supply it a type (like `Integer`), and it produces a new concrete type, `Set<Integer>`.
A higher-kinded type allows us to write code that is generic over Set itself, or List, or CompletableFuture.
Generic code in Practice
Functions
Without Higher-Order Functions:
To apply different operations to a list, we would need to write separate loops for each one.
```java
List<String> results = new ArrayList<>();
for (int i : numbers) {
    results.add(intToString(i)); // Behaviour is hardcoded
}
```
With Higher-Order Functions:
We abstract the behaviour into a function and pass it in. This is much more flexible.
```java
// A map for List
<A, B> List<B> mapList(List<A> list, Function<A, B> f);

// A map for Optional
<A, B> Optional<B> mapOptional(Optional<A> opt, Function<A, B> f);

// A map for CompletableFuture
<A, B> CompletableFuture<B> mapFuture(CompletableFuture<A> future, Function<A, B> f);
```
Notice the repeated pattern: the core logic is the same, but the "container" is different.
With Higher-Kinded Types:
With Higher-Kinded-J we can abstract over the container F itself. This allows us to write one single, generic map function that works for any container structure or computational context that can be mapped over (i.e., any Functor). This is precisely what the GenericExample.java demonstrates.
```java
// F is a "type variable" that stands for List, Optional, etc.
// This is a function generic over the container F.
public static <F, A, B> Kind<F, B> map(
    Functor<F> functorInstance, // The implementation for F
    Kind<F, A> kindBox,         // The container with a value
    Function<A, B> f) {         // The behaviour to apply
  return functorInstance.map(f, kindBox);
}
```
Here, Kind<F, A> is the higher-kinded type that represents "some container F holding a value of type A."
Both concepts allow you to write more generic and reusable code by parametrising things that are normally fixed. Higher-order functions parametrise behaviour, while higher-kinded types parametrise the structure that contains the behaviour.
We will discuss the GenericExample.java in detail later, but you can take a peek at the code here
The Core Idea: Abstraction over Containers
In short: a higher-kinded type is a way to be generic over the container type itself.
Think about the different "container" types you use every day in Java: List<T>, Optional<T>, Future<T>, Set<T>. All of these are generic containers that hold a value of type T.
The problem is that you can't write a single method in Java that accepts any of these containers and performs an action, because List, Optional, and Future don't share a useful common interface. A higher-kinded type solves this by letting you write code that works with F<T>, where F itself is a variable representing the container type (List, Optional, etc.).
Building Up from Java Generics
Level 1: Concrete Types (like values)
A normal, complete type is like a value. It's a "thing".
```java
String myString;         // A concrete type
List<Integer> myIntList; // Also a concrete type (a List of Integers)
```
Level 2: Generic Types (like functions)
A generic type definition like List<T> is not a complete type. It's a type constructor. It's like a function at the type level: you give it a type (e.g., String), and it produces a concrete type (List<String>).
```java
// List<T> is a "type function" that takes one parameter, T.
// We can call it a type of kind: * -> *
// (It takes one concrete type and produces one concrete type)
```
You can't use `List` on its own as a complete, type-safe type; you must provide the type parameter `T` (raw types exist only for legacy compatibility).
Level 3: Higher-Kinded Types (like functions that take other functions)
This is the part Java doesn't support directly. A higher-kinded type is a construct that is generic over the type constructor itself. Imagine you want to write a single map function that works on any container. You want to write code that says: "Given any container F holding type A, and a function to turn an A into a B, I will give you back a container F holding type B." In imaginary Java syntax, it would look like this:
```java
// THIS IS NOT REAL JAVA SYNTAX
public <F<?>, A, B> F<B> map(F<A> container, Function<A, B> func);
```
Here, F is the higher-kinded type parameter. It's a variable that can stand for List, Optional, Future, or any other * -> * type constructor.
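The imaginary syntax above can be approximated in real Java with the witness-type trick. Below is a minimal, self-contained sketch of the idea, deliberately simplified relative to Higher-Kinded-J's actual classes (the names `MiniHkt`, `OptionalWitness`, and `OptionalHolder` are illustrative):

```java
import java.util.Optional;
import java.util.function.Function;

public class MiniHkt {
    // Kind<F, A> stands in for the unwritable F<A>.
    interface Kind<F, A> {}

    // Witness for the Optional type constructor, plus a holder wrapping the real Optional.
    static final class OptionalWitness {}
    record OptionalHolder<A>(Optional<A> value) implements Kind<OptionalWitness, A> {}

    // A Functor defined once, generically over any F.
    interface Functor<F> {
        <A, B> Kind<F, B> map(Function<A, B> f, Kind<F, A> fa);
    }

    // The Functor instance for Optional's witness.
    static final Functor<OptionalWitness> OPTIONAL_FUNCTOR = new Functor<>() {
        @Override
        public <A, B> Kind<OptionalWitness, B> map(Function<A, B> f, Kind<OptionalWitness, A> fa) {
            Optional<A> opt = ((OptionalHolder<A>) fa).value();
            return new OptionalHolder<>(opt.map(f));
        }
    };

    // One generic function that works for ANY container with a Functor instance.
    static <F, A, B> Kind<F, B> map(Functor<F> functor, Kind<F, A> fa, Function<A, B> f) {
        return functor.map(f, fa);
    }

    public static void main(String[] args) {
        Kind<OptionalWitness, Integer> boxed = new OptionalHolder<>(Optional.of(21));
        Kind<OptionalWitness, Integer> doubled = map(OPTIONAL_FUNCTOR, boxed, x -> x * 2);
        System.out.println(((OptionalHolder<Integer>) doubled).value()); // Optional[42]
    }
}
```

The same generic `map` would work for a `ListWitness` with its own holder and `Functor` instance; that is the whole point of the encoding.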
A Practical Analogy: The Shipping Company

Think of it like working at a shipping company.
A concrete type List<String> is a "Cardboard Box full of Apples".
A generic type List<T> is a blueprint for a "Cardboard Box" that can hold anything (T).
Now, you want to write a single set of instructions (a function) for your robotic arm called addInsuranceLabel. You want these instructions to work on any kind of container.
Without HKTs (The Java Way): You have to write separate instructions for each container type.
```java
addInsuranceToCardboardBox(CardboardBox<T> box, ...)
addInsuranceToPlasticCrate(PlasticCrate<T> crate, ...)
addInsuranceToMetalCase(MetalCase<T> metalCase, ...)
```
With HKTs (The Abstract Way): You write one generic set of instructions.
```java
addInsuranceToContainer(Container<T> container, ...)
```
A higher-kinded type is the concept of being able to write code that refers to Container<T> — an abstraction over the container or "context" that holds the data.
Higher-Kinded-J simulates HKTs in Java using a technique inspired by defunctionalisation. It allows you to define and use common functional abstractions like Functor, Applicative, and Monad (including MonadError) in a way that works generically across different simulated type constructors.
Why bother? Higher-Kinded-J unlocks several benefits:
- Write Abstract Code: Define functions and logic that operate polymorphically over different computational contexts (e.g., handle optionality, asynchronous operations, error handling, side effects, or collections using the same core logic).
- Leverage Functional Patterns: Consistently apply powerful patterns like `map`, `flatMap`, `ap`, `sequence`, `traverse`, and monadic error handling (`raiseError`, `handleErrorWith`) across diverse data types.
- Build Composable Systems: Create complex workflows and abstractions by composing smaller, generic pieces, as demonstrated in the included Order Processing Example.
- Understand HKT Concepts: Provides a practical, hands-on way to understand HKTs and type classes even within Java's limitations.
- Lay the Foundations: Building on HKTs unlocks the possibilities for advanced abstractions like Optics, which provide composable ways to access and modify nested data structures.
While Higher-Kinded-J introduces some boilerplate compared to languages with native HKT support, it offers a valuable way to explore these powerful functional programming concepts in Java.
Core Concepts of Higher-Kinded-J

- How the Kind<F, A> interface simulates higher-kinded types in Java
- The role of witness types in representing type constructors
- Understanding defunctionalisation and how it enables HKT simulation
- The difference between internal library types and external Java types
- How type classes provide generic operations across different container types
Higher-Kinded-J employs several key components to emulate Higher-Kinded Types (HKTs) and associated functional type classes in Java. Understanding these is crucial for using and extending the library.
Feel free to skip ahead to the examples and come back later for the theory
1. The HKT Problem in Java
As we've already discussed, Java's type system lacks native support for Higher-Kinded Types. We can easily parametrise a type by another type (like List<String>), but we cannot easily parametrise a type or method by a type constructor itself (like F<_>). We can't write void process<F<_>>(F<Integer> data) to mean "process any container F of Integers".
You'll often see Higher-Kinded Types represented with an underscore, such as F<_> (e.g., List<_>, Optional<_>). This notation, borrowed from languages like Scala, represents a "type constructor"—a type that is waiting for a type parameter. It's important to note that this underscore is a conceptual placeholder and is not the same as Java's ? wildcard, which is used for instantiated types. Our library provides a way to simulate this F<_> concept in Java.
2. The Kind<F, A> Bridge
At the very centre of the library are the Kind interfaces, which make higher-kinded types possible in Java.
- `Kind<F, A>`: This is the foundational interface that emulates a higher-kinded type. It represents a type `F` that is generic over a type `A`. For example, `Kind<ListKind.Witness, String>` represents a `List<String>`. You will see this interface used everywhere as the common currency for all our functional abstractions.
- `Kind2<F, A, B>`: This interface extends the concept to types that take two type parameters, such as `Function<A, B>` or `Either<L, R>`. For example, `Kind2<FunctionKind.Witness, String, Integer>` represents a `Function<String, Integer>`. This is essential for working with profunctors and other dual-parameter abstractions.
- Purpose: To simulate the application of a type constructor `F` (like `List`, `Optional`, `IO`) to a type argument `A` (like `String`, `Integer`), representing the concept of `F<A>`.
- `F` (Witness Type): This is the crucial part of the simulation. Since `F<_>` isn't a real Java type parameter, we use a marker type (often an empty interface specific to the constructor) as a "witness" or stand-in for `F`. Examples:
  - `ListKind<ListKind.Witness>` represents the `List` type constructor.
  - `OptionalKind<OptionalKind.Witness>` represents the `Optional` type constructor.
  - `EitherKind.Witness<L>` represents the `Either<L, _>` type constructor (where `L` is fixed).
  - `IOKind<IOKind.Witness>` represents the `IO` type constructor.
- `A` (Type Argument): The concrete type contained within or parametrised by the constructor (e.g., `Integer` in `List<Integer>`).
- How it Works: The library provides a seamless bridge between a standard Java type, like a `java.util.List<Integer>`, and its `Kind` representation, `Kind<ListKind.Witness, Integer>`. Instead of requiring you to manually wrap objects, this conversion is handled by static helper methods, typically `widen` and `narrow`.
  - To treat a `List<Integer>` as a `Kind`, you use a helper function like `LIST.widen()`.
  - This `Kind` object can then be passed to generic functions (such as `map` or `flatMap` from a `Functor` or `Monad` instance) that expect `Kind<F, A>`.
- Reference: `Kind.java`
For quick definitions of HKT concepts like Kind, Witness Types, and Defunctionalisation, see the Glossary.
3. Type Classes (Functor, Applicative, Monad, MonadError)
These are interfaces that define standard functional operations that work generically over any simulated type constructor F (represented by its witness type) for which an instance of the type class exists. They operate on Kind<F, A> objects.
- `Functor<F>`:
  - Defines `map(Function<A, B> f, Kind<F, A> fa)`: Applies a function `f: A -> B` to the value(s) inside the context `F` without changing the context's structure, resulting in a `Kind<F, B>`. Think `List.map`, `Optional.map`.
  - Laws: Identity (`map(id) == id`), Composition (`map(g.compose(f)) == map(g).compose(map(f))`).
  - Reference: `Functor.java`
- `Applicative<F>`:
  - Extends `Functor<F>`.
  - Adds `of(A value)`: Lifts a pure value `A` into the context `F`, creating a `Kind<F, A>` (e.g., `1` becomes `Optional.of(1)` wrapped in a `Kind`).
  - Adds `ap(Kind<F, Function<A, B>> ff, Kind<F, A> fa)`: Applies a function wrapped in context `F` to a value wrapped in context `F`, returning a `Kind<F, B>`. This enables combining multiple independent values within the context.
  - Provides default `mapN` methods (e.g., `map2`, `map3`) built upon `ap` and `map`.
  - Laws: Identity, Homomorphism, Interchange, Composition.
  - Reference: `Applicative.java`
- `Monad<F>`:
  - Extends `Applicative<F>`.
  - Adds `flatMap(Function<A, Kind<F, B>> f, Kind<F, A> ma)`: Sequences operations within the context `F`. Takes a value `A` from context `F`, applies a function `f` that returns a new context `Kind<F, B>`, and returns the result flattened into a single `Kind<F, B>`. Essential for chaining dependent computations (e.g., chaining `Optional` calls, sequencing `CompletableFuture`s, combining `IO` actions). Also known in functional languages as `bind` or `>>=`.
  - Provides default `flatMapN` methods (e.g., `flatMap2`, `flatMap3`, `flatMap4`, `flatMap5`) for combining multiple monadic values with an effectful function. These methods sequence operations where the combining function itself returns a monadic value, unlike `mapN`, which uses a pure function.
  - Laws: Left Identity, Right Identity, Associativity.
  - Reference: `Monad.java`
- `MonadError<F, E>`:
  - Extends `Monad<F>`.
  - Adds error handling capabilities for contexts `F` that have a defined error type `E`.
  - Adds `raiseError(E error)`: Lifts an error `E` into the context `F`, creating a `Kind<F, A>` representing the error state (e.g., `Either.Left`, `Try.Failure`, or a failed `CompletableFuture`).
  - Adds `handleErrorWith(Kind<F, A> ma, Function<E, Kind<F, A>> handler)`: Allows recovering from an error state `E` by providing a function that takes the error and returns a new context `Kind<F, A>`.
  - Provides default recovery methods like `handleError`, `recover`, `recoverWith`.
  - Reference: `MonadError.java`
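Before reaching for the generic machinery, it helps to notice that the JDK's own `Optional` already exhibits these operations in concrete form; the type classes simply generalise them to any context `F`:

```java
import java.util.Optional;

public class OptionalChain {
    public static void main(String[] args) {
        // map: transform the value inside the context (Functor)
        Optional<Integer> len = Optional.of("test").map(String::length);

        // flatMap: sequence a dependent computation (Monad)
        Optional<String> checked =
            len.flatMap(n -> n > 3 ? Optional.of("Long enough") : Optional.empty());

        // recovery: handle the empty ("error") state (MonadError-like)
        String result = checked.orElse("Default");
        System.out.println(result); // prints "Long enough"
    }
}
```

The limitation is that this code is forever tied to `Optional`; the same chain for `List` or `CompletableFuture` must be rewritten against a different API. The type classes above let you write it once against `Kind<F, A>`.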
4. Defunctionalisation (Per Type Constructor)
For each Java type constructor (like List, Optional, IO) you want to simulate as a Higher-Kinded Type, a specific pattern involving several components is used. The exact implementation differs slightly depending on whether the type is defined within the Higher-Kinded-J library (e.g., Id, Maybe, IO, monad transformers) or if it's an external type (e.g., java.util.List, java.util.Optional, java.util.concurrent.CompletableFuture).
Common Components:
- The `XxxKind` Interface: A specific marker interface, for example, `OptionalKind<A>`. This interface extends `Kind<F, A>`, where `F` is the witness type representing the type constructor.
  - Example: `public interface OptionalKind<A> extends Kind<OptionalKind.Witness, A> { /* ... Witness class ... */ }`
  - The `Witness` (e.g., `OptionalKind.Witness`) is a static nested final class (or a separate, accessible class) within `OptionalKind`. This `Witness` type is what's used as the `F` parameter in generic type classes like `Monad<F>`.
- The `KindHelper` Class (e.g., `OptionalKindHelper`): A crucial utility providing `widen` and `narrow` methods:
  - `widen(...)`: Converts the standard Java type (e.g., `Optional<String>`) into its `Kind<F, A>` representation.
  - `narrow(Kind<F, A> kind)`: Converts the `Kind<F, A>` representation back to the underlying Java type (e.g., `Optional<String>`). Crucially, this method throws `KindUnwrapException` if the input `kind` is structurally invalid (e.g., `null`, the wrong `Kind` type, or, for holder-based types, a `Holder` containing `null` where it shouldn't). This ensures robustness.
  - May contain other convenience factory methods.
- Type Class Instance(s): Concrete classes implementing `Functor<F>`, `Monad<F>`, etc., for the specific witness type `F` (e.g., `OptionalMonad implements Monad<OptionalKind.Witness>`). These instances use the `KindHelper`'s `widen` and `narrow` methods to operate on the underlying Java types.
Library-Defined Types:
- For types defined within Higher-Kinded-J (e.g., `Id`, `IO`, `Maybe`, `Either`, and monad transformers like `EitherT`):
  - These types are designed to directly participate in the HKT simulation.
  - The type itself (e.g., `Id<A>`, `IO<A>`, `Just<T>`, `Either.Right<L,R>`) will directly implement its corresponding `XxxKind` interface (e.g., `Id<A> implements IdKind<A>`, `IO<A> extends IOKind<A>`, `Just<T> implements MaybeKind<T>`, `Either.Right<L,R> implements EitherKind<L,R>`).
  - In this case, a separate `Holder` record is not needed for the primary `widen`/`narrow` mechanism in the `KindHelper`. `XxxKindHelper.widen(IO<A> io)` would effectively be a type cast (after null checks) to `Kind<IOKind.Witness, A>`, because `IO<A>` is already an `IOKind<A>`. `XxxKindHelper.narrow(Kind<IOKind.Witness, A> kind)` would check `instanceof IO` and perform a cast.
  - This approach provides zero runtime overhead for widen/narrow operations (no wrapper object allocation) and an improved debugging experience (actual types visible in stack traces).
This distinction is important for understanding how `widen` and `narrow` work for different types. However, from the perspective of a user of a type class instance (like `OptionalMonad`), the interaction remains consistent: you provide a `Kind` object, and the type class instance handles the necessary operations.
5. The Unit Type
In functional programming, it's common to have computations or functions that perform an action (often a side effect) but do not produce a specific, meaningful result value. In Java, methods that don't return a value use the void keyword. However, void is not a first-class type and cannot be used as a generic type parameter A in Kind<F, A>.
Higher-Kinded-J provides the org.higherkindedj.hkt.Unit type to address this.
- Purpose: `Unit` is a type that has exactly one value, `Unit.INSTANCE`. It is used to represent the successful completion of an operation that doesn't yield any other specific information. Think of it as a functional equivalent of `void`, but usable as a generic type.
- Usage in HKT:
  - When a monadic action `Kind<F, A>` completes successfully but has no specific value to return (e.g., an `IO` action that prints to the console), `A` can be `Unit`. The action would then be `Kind<F, Unit>`, and its successful result would conceptually be `Unit.INSTANCE`. For example, `IO<Unit>` for a print operation.
  - In `MonadError<F, E>`, if the error state `E` simply represents an absence or a failure without specific details (like `Optional.empty()` or `Maybe.Nothing()`), `Unit` can be used as the type for `E`. The `raiseError` method would then be called with `Unit.INSTANCE`. For instance, `OptionalMonad` implements `MonadError<OptionalKind.Witness, Unit>`, and `MaybeMonad` implements `MonadError<MaybeKind.Witness, Unit>`.
- Example:
```java
// An IO action that just performs a side effect (printing)
Kind<IOKind.Witness, Unit> printAction = IOKindHelper.delay(() -> {
    System.out.println("Effect executed!");
    return Unit.INSTANCE; // Explicitly return Unit.INSTANCE
});
IOKindHelper.unsafeRunSync(printAction); // Executes the print

// Optional treated as MonadError<..., Unit>
OptionalMonad optionalMonad = OptionalMonad.INSTANCE;
Kind<OptionalKind.Witness, String> emptyOptional =
    optionalMonad.raiseError(Unit.INSTANCE); // Creates Optional.empty()
```
- Reference: `Unit.java`
6. Error Handling Philosophy
- Domain Errors: These are expected business-level errors or alternative outcomes. They are represented within the structure of the simulated type (e.g., `Either.Left`, `Maybe.Nothing`, `Try.Failure`, a failed `CompletableFuture`, potentially a specific result type within `IO`). These are handled using the type's specific methods or `MonadError` capabilities (`handleErrorWith`, `recover`, `fold`, `orElse`, etc.) after successfully unwrapping the `Kind`.
- Simulation Errors (`KindUnwrapException`): These indicate a problem with the HKT simulation itself, usually a programming error. Examples include passing `null` to `narrow`, passing a `ListKind` to `OptionalKindHelper.narrow`, or (if it were possible) having a `Holder` record contain a `null` reference to the underlying Java object it's supposed to hold. These are signalled by throwing the unchecked `KindUnwrapException` from `narrow` methods to clearly distinguish infrastructure issues from domain errors. You typically shouldn't need to catch `KindUnwrapException` unless debugging the simulation usage itself.
Usage Guide: Working with Higher-Kinded-J

- The five-step workflow for using Higher-Kinded-J effectively
- How to identify the right context (witness type) for your use case
- Using widen() and narrow() to convert between Java types and Kind representations
- When and how to handle KindUnwrapException safely
- Writing generic functions that work with any Functor or Monad
This guide explains the step-by-step process of using Higher-Kinded-J's simulated Higher-Kinded Types (HKTs) and associated type classes like Functor, Applicative, Monad, and MonadError.
Core Workflow
The general process involves these steps:
Determine which type constructor (computational context) you want to work with abstractly. This context is represented by its witness type.
Examples:
- `ListKind.Witness` for `java.util.List`
- `OptionalKind.Witness` for `java.util.Optional`
- `MaybeKind.Witness` for the custom `Maybe` type
- `EitherKind.Witness<L>` for the custom `Either<L, R>` type (where `L` is fixed)
- `TryKind.Witness` for the custom `Try` type
- `CompletableFutureKind.Witness` for `java.util.concurrent.CompletableFuture`
- `IOKind.Witness` for the custom `IO` type
- `LazyKind.Witness` for the custom `Lazy` type
- `ReaderKind.Witness<R_ENV>` for the custom `Reader<R_ENV, A>` type
- `StateKind.Witness<S>` for the custom `State<S, A>` type
- `WriterKind.Witness<W>` for the custom `Writer<W, A>` type
- For transformers, e.g., `EitherTKind.Witness<F_OUTER_WITNESS, L_ERROR>`
Obtain an instance of the required type class (Functor<F_WITNESS>, Applicative<F_WITNESS>, Monad<F_WITNESS>, MonadError<F_WITNESS, E>) for your chosen context's witness type F_WITNESS.
These are concrete classes provided in the corresponding package.
Examples:
- `Optional`: `OptionalMonad optionalMonad = OptionalMonad.INSTANCE;` (implements `MonadError<OptionalKind.Witness, Unit>`)
- `List`: `ListMonad listMonad = ListMonad.INSTANCE;` (implements `Monad<ListKind.Witness>`)
- `CompletableFuture`: `CompletableFutureMonad futureMonad = CompletableFutureMonad.INSTANCE;` (implements `MonadError<CompletableFutureKind.Witness, Throwable>`)
- `Either<String, ?>`: `EitherMonad<String> eitherMonad = EitherMonad.instance();` (implements `MonadError<EitherKind.Witness<String>, String>`)
- `IO`: `IOMonad ioMonad = IOMonad.INSTANCE;` (implements `Monad<IOKind.Witness>`)
- `Writer<String, ?>`: `WriterMonad<String> writerMonad = new WriterMonad<>(new StringMonoid());` (implements `Monad<WriterKind.Witness<String>>`)
Convert your standard Java object (e.g., a List<Integer>, an Optional<String>, an IO<String>) into Higher-Kinded-J's Kind representation using the widen instance method from the corresponding XxxKindHelper enum's singleton instance. You'll typically use a static import for the singleton instance for brevity.
```java
import static org.higherkindedj.hkt.optional.OptionalKindHelper.OPTIONAL;
// ...
Optional<String> myOptional = Optional.of("test");

// Widen it into the Higher-Kinded-J Kind type
// F_WITNESS here is OptionalKind.Witness
Kind<OptionalKind.Witness, String> optionalKind = OPTIONAL.widen(myOptional);
```
- Helper enums provide convenience factory methods that also return `Kind` instances, e.g., `MAYBE.just("value")`, `TRY.failure(ex)`, `IO_OP.delay(() -> ...)`, `LAZY.defer(() -> ...)`. Remember to import these statically from the `XxxKindHelper` classes.
- Note on Widening:
  - For JDK types (like `List`, `Optional`), `widen` typically creates an internal `Holder` object that wraps the JDK type and implements the necessary `XxxKind` interface.
  - For library-defined types (`Id`, `IO`, `Maybe`, `Either`, `Validated`, and transformers like `EitherT`) that directly implement their `XxxKind` interface (which in turn extends `Kind`), the `widen` method on the helper enum performs a null check and then a direct (and safe) cast to the `Kind` type. This provides zero runtime overhead: no wrapper object allocation is needed.
Use the methods defined by the type class interface (map, flatMap, of, ap, raiseError, handleErrorWith, etc.) by calling them on the type class instance obtained in Step 2, passing your Kind value(s) as arguments. Do not call map/flatMap directly on the Kind object itself if it's just the Kind interface. (Some concrete Kind implementations like Id or Maybe might offer direct methods, but for generic programming, use the type class instance).
```java
import static org.higherkindedj.hkt.optional.OptionalKindHelper.OPTIONAL;
// ...
OptionalMonad optionalMonad = OptionalMonad.INSTANCE;
Kind<OptionalKind.Witness, String> optionalKind = OPTIONAL.widen(Optional.of("test")); // from previous step

// --- Using map ---
Function<String, Integer> lengthFunc = String::length;
// Apply map using the monad instance
Kind<OptionalKind.Witness, Integer> lengthKind = optionalMonad.map(lengthFunc, optionalKind);
// lengthKind now represents Kind<OptionalKind.Witness, Integer> containing Optional.of(4)

// --- Using flatMap ---
// Function A -> Kind<F_WITNESS, B>
Function<Integer, Kind<OptionalKind.Witness, String>> checkLength =
    len -> OPTIONAL.widen(len > 3 ? Optional.of("Long enough") : Optional.empty());
// Apply flatMap using the monad instance
Kind<OptionalKind.Witness, String> checkedKind = optionalMonad.flatMap(checkLength, lengthKind);
// checkedKind now represents Kind<OptionalKind.Witness, String> containing Optional.of("Long enough")

// --- Using MonadError (for Optional, the error type is Unit) ---
Kind<OptionalKind.Witness, String> emptyKind = optionalMonad.raiseError(Unit.INSTANCE); // Represents Optional.empty()
// Handle the empty case (error state) using handleErrorWith
Kind<OptionalKind.Witness, String> handledKind = optionalMonad.handleErrorWith(
    emptyKind,
    ignoredError -> OPTIONAL.widen(Optional.of("Default Value")) // The recovery function must also return a Kind
);
```
Note: For complex chains of monadic operations, consider using For Comprehensions which provide more readable syntax than nested flatMap calls.
When you need the underlying Java value back (e.g., to return from a method boundary, perform side effects like printing or running IO), use the narrow instance method from the corresponding XxxKindHelper enum's singleton instance.
```java
import static org.higherkindedj.hkt.optional.OptionalKindHelper.OPTIONAL;
import static org.higherkindedj.hkt.io.IOKindHelper.IO_OP;
// ...
// Continuing the Optional example:
Kind<OptionalKind.Witness, String> checkedKind = /* from previous step */;
Kind<OptionalKind.Witness, String> handledKind = /* from previous step */;
Optional<String> finalOptional = OPTIONAL.narrow(checkedKind);
System.out.println("Final Optional: " + finalOptional);
// Output: Optional[Long enough]
Optional<String> handledOptional = OPTIONAL.narrow(handledKind);
System.out.println("Handled Optional: " + handledOptional);
// Output: Optional[Default Value]
// Example for IO:
IOMonad ioMonad = IOMonad.INSTANCE;
Kind<IOKind.Witness, String> ioKind = IO_OP.delay(() -> "Hello from IO!");
// Use IO_OP.delay
// unsafeRunSync is an instance method on IOKindHelper.IO_OP
String ioResult = IO_OP.unsafeRunSync(ioKind);
System.out.println(ioResult);
```
The `narrow` instance methods in all `KindHelper` enums are designed to be robust against structural errors within the HKT simulation layer: they throw an unchecked `KindUnwrapException` when the simulation is misused.

- When it's thrown: if you pass `null` to `narrow`. For external types using a `Holder` (like `Optional` with `OptionalHolder`), an exception is also thrown if the `Kind` instance is not the expected `Holder` type. For types that directly implement their `XxxKind` interface, `narrow` will throw if the `Kind` is not an instance of that specific concrete type.
- What it means: this exception signals a problem with how you are using Higher-Kinded-J itself, usually a programming error in creating or passing `Kind` objects.
- How to handle: you generally should not need to catch `KindUnwrapException` in typical application logic. Its occurrence points to a bug that needs fixing in the code using Higher-Kinded-J.
```java
// import static org.higherkindedj.hkt.optional.OptionalKindHelper.OPTIONAL;

public void handlingUnwrapExceptions() {
  try {
    // ERROR: Attempting to narrow null
    Optional<String> result = OPTIONAL.narrow(null);
  } catch (KindUnwrapException e) {
    System.err.println("Higher-Kinded-J Usage Error: " + e.getMessage());
    // Example Output (message from OptionalKindHelper.INVALID_KIND_NULL_MSG):
    // Usage Error: Cannot narrow null Kind for Optional
  }
}
```
Important Distinction:

- `KindUnwrapException`: Signals a problem with the Higher-Kinded-J structure itself (e.g., an invalid `Kind` object passed to `narrow`). Fix the code using Higher-Kinded-J.
- Domain Errors / Absence: Represented within a valid `Kind` structure (e.g., `Optional.empty()` widened to `Kind<OptionalKind.Witness, A>`, `Either.Left` widened to `Kind<EitherKind.Witness<L>, R>`). These should be handled using the monad's specific methods (`orElse`, `fold`, `handleErrorWith`, etc.) or by using the `MonadError` methods before narrowing back to the final Java type.
Higher-Kinded-J allows writing functions generic over the simulated type constructor (represented by its witness F_WITNESS).
```java
// import static org.higherkindedj.hkt.list.ListKindHelper.LIST;
// import static org.higherkindedj.hkt.optional.OptionalKindHelper.OPTIONAL;
// ...

// Generic function: applies a function within any Functor context F_WITNESS.
// Requires the specific Functor<F_WITNESS> instance to be passed in.
public static <F_WITNESS, A, B> Kind<F_WITNESS, B> mapWithFunctor(
    Functor<F_WITNESS> functorInstance, // Pass the type class instance for F_WITNESS
    Function<A, B> fn,
    Kind<F_WITNESS, A> kindABox) {
  // Use the map method from the provided Functor instance
  return functorInstance.map(fn, kindABox);
}

public void genericExample() {
  // Get instances of the type classes for the specific types (F_WITNESS) we want to use
  ListMonad listMonad = ListMonad.INSTANCE; // Implements Functor<ListKind.Witness>
  OptionalMonad optionalMonad = OptionalMonad.INSTANCE; // Implements Functor<OptionalKind.Witness>

  Function<Integer, Integer> doubleFn = x -> x * 2;

  // --- Use with List ---
  List<Integer> nums = List.of(1, 2, 3);
  // Widen the List. F_WITNESS is ListKind.Witness
  Kind<ListKind.Witness, Integer> listKind = LIST.widen(nums);
  // Call the generic function, passing the ListMonad instance and the widened List
  Kind<ListKind.Witness, Integer> doubledListKind = mapWithFunctor(listMonad, doubleFn, listKind);
  System.out.println("Doubled List: " + LIST.narrow(doubledListKind)); // Output: [2, 4, 6]

  // --- Use with Optional (Present) ---
  Optional<Integer> optNum = Optional.of(10);
  // Widen the Optional. F_WITNESS is OptionalKind.Witness
  Kind<OptionalKind.Witness, Integer> optKind = OPTIONAL.widen(optNum);
  // Call the generic function, passing the OptionalMonad instance and the widened Optional
  Kind<OptionalKind.Witness, Integer> doubledOptKind = mapWithFunctor(optionalMonad, doubleFn, optKind);
  System.out.println("Doubled Optional: " + OPTIONAL.narrow(doubledOptKind)); // Output: Optional[20]

  // --- Use with Optional (Empty) ---
  Optional<Integer> emptyOpt = Optional.empty();
  Kind<OptionalKind.Witness, Integer> emptyOptKind = OPTIONAL.widen(emptyOpt);
  // Call the generic function; map does nothing on empty
  Kind<OptionalKind.Witness, Integer> doubledEmptyOptKind = mapWithFunctor(optionalMonad, doubleFn, emptyOptKind);
  System.out.println("Doubled Empty Optional: " + OPTIONAL.narrow(doubledEmptyOptKind)); // Output: Optional.empty
}
```
Higher-Kinded Types - Basic Usage Examples
This document provides a brief summary of the example classes found in the `org.higherkindedj.example.basic` package in the HKJ-Examples.
These examples showcase how to use various monads and monad transformers to handle common programming tasks like managing optional values, asynchronous operations, and state in a functional way.
Monads
EitherExample.java
This example demonstrates the Either monad. Either is used to represent a value that can be one of two types, typically a success value (Right) or an error value (Left).
- Key Concept: An `Either` provides a way to handle computations that can fail with a specific error type.
- Demonstrates:
  - Creating `Either` instances for success (`Right`) and failure (`Left`) cases.
  - Using `flatMap` to chain operations that return an `Either`, short-circuiting on failure.
  - Using `fold` to handle both the `Left` and `Right` cases.
```java
// Chain operations that can fail
Either<String, Integer> result = input.flatMap(parse).flatMap(checkPositive);

// Fold to handle both outcomes
String message = result.fold(
    leftValue -> "Operation failed with: " + leftValue,
    rightValue -> "Operation succeeded with: " + rightValue
);
```
ForComprehensionExample.java
This example demonstrates how to use the For comprehension, a feature that provides a more readable, sequential syntax for composing monadic operations (equivalent to flatMap chains).
- Key Concept: A `For` comprehension offers syntactic sugar for `flatMap` and `map` calls, making complex monadic workflows easier to write and understand.
- Demonstrates:
  - Using `For.from()` to start and chain monadic operations.
  - Applying comprehensions to different monads like `List`, `Maybe`, and the `StateT` monad transformer.
  - Filtering intermediate results with `.when()`.
  - Introducing intermediate values with `.let()`.
  - Producing a final result with `.yield()`.
```java
// A for-comprehension with List
final Kind<ListKind.Witness, String> result =
    For.from(listMonad, list1)
        .from(_ -> list2)
        .when(t -> (t._1() + t._2()) % 2 != 0) // Filter
        .let(t -> "Sum: " + (t._1() + t._2())) // Introduce new value
        .yield((a, b, c) -> a + " + " + b + " = " + c); // Final result
```
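Desugared, the comprehension above is just a `flatMap`/`filter`/`map` chain. The JDK-only sketch below (class and method names are illustrative, not the library's API) makes the correspondence explicit using `Stream`:

```java
import java.util.List;
import java.util.stream.Collectors;

// JDK-only analogue of the For comprehension above: each .from(...) is a
// flatMap, .when(...) a filter, .let(...) introduces a derived value, and
// .yield(...) is the final map. Names here are illustrative.
public class ForDesugared {
    static List<String> run(List<Integer> list1, List<Integer> list2) {
        return list1.stream()
            .flatMap(a -> list2.stream()
                .filter(b -> (a + b) % 2 != 0)         // .when(...)
                .map(b -> {
                    String c = "Sum: " + (a + b);      // .let(...)
                    return a + " + " + b + " = " + c;  // .yield(...)
                }))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Pairs with an even sum are filtered out
        System.out.println(run(List.of(1, 2), List.of(10, 20)));
    }
}
```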
CompletableFutureExample.java
This example covers the CompletableFuture monad. It shows how to use CompletableFuture within the Higher-Kinded-J framework to manage asynchronous computations and handle potential errors.
- Key Concept: The `CompletableFuture` monad is used to compose asynchronous operations in a non-blocking way.
- Demonstrates:
  - Creating `Kind`-wrapped `CompletableFuture` instances for success and failure.
  - Using `map` (which corresponds to `thenApply`).
  - Using `flatMap` (which corresponds to `thenCompose`) to chain dependent asynchronous steps.
  - Using `handleErrorWith` to recover from exceptions that occur within the future.
```java
// Using handleErrorWith to recover from a failed future
Function<Throwable, Kind<CompletableFutureKind.Witness, String>> recoveryHandler =
    error -> {
      System.out.println("Handling error: " + error.getMessage());
      return futureMonad.of("Recovered from Error");
    };

Kind<CompletableFutureKind.Witness, String> recoveredFuture =
    futureMonad.handleErrorWith(failedFutureKind, recoveryHandler);
```
IdExample.java
This example introduces the Identity (Id) monad. The Id monad is the simplest monad; it wraps a value without adding any computational context. It is primarily used to make generic code that works with any monad also work with simple, synchronous values.
- Key Concept: The `Id` monad represents a direct, synchronous computation. It wraps a value, and its `flatMap` operation simply applies the function to the value.
- Demonstrates:
  - Wrapping a plain value into an `Id`.
  - Using `map` and `flatMap` on an `Id` value.
  - Its use as the underlying monad in a monad transformer stack, effectively turning `StateT<S, IdKind.Witness, A>` into `State<S, A>`.
```java
// flatMap on Id simply applies the function to the wrapped value.
Id<Integer> idFromOf = Id.of(42);
Id<String> directFlatMap = idFromOf.flatMap(i -> Id.of("Direct FlatMap: " + i));
// directFlatMap.value() is "Direct FlatMap: 42"
```
IOExample.java
This example introduces the IO monad, which is used to encapsulate side effects like reading from the console, writing to a file, or making a network request.
- Key Concept: The `IO` monad describes a computation that can perform side effects. These effects are only executed when the `IO` action is explicitly run.
- Demonstrates:
  - Creating `IO` actions that describe side effects using `delay`.
  - Composing `IO` actions using `map` and `flatMap` to create more complex programs.
  - Executing `IO` actions to produce a result using `unsafeRunSync`.
```java
// Create an IO action to read a line from the console
Kind<IOKind.Witness, String> readLine = IO_OP.delay(() -> {
  System.out.print("Enter your name: ");
  try (Scanner scanner = new Scanner(System.in)) {
    return scanner.nextLine();
  }
});

// Execute the action to get the result
String name = IO_OP.unsafeRunSync(readLine);
```
LazyExample.java
This example covers the Lazy monad. It's used to defer a computation until its result is explicitly requested. The result is then memoized (cached) so the computation is only executed once.
- Key Concept: A `Lazy` computation is not executed when it is created, but only when `force()` is called. The result (or exception) is then stored for subsequent calls.
- Demonstrates:
  - Creating a deferred computation with `LAZY.defer()`.
  - Forcing evaluation with `LAZY.force()`.
  - How results are memoized, preventing re-computation.
  - Using `map` and `flatMap` to build chains of lazy operations.
```java
// Defer a computation
java.util.concurrent.atomic.AtomicInteger counter = new java.util.concurrent.atomic.AtomicInteger(0);
Kind<LazyKind.Witness, String> deferredLazy = LAZY.defer(() -> {
  counter.incrementAndGet();
  return "Computed Value";
});

// The computation only runs when force() is called
System.out.println(LAZY.force(deferredLazy)); // counter becomes 1
System.out.println(LAZY.force(deferredLazy)); // result is from cache, counter remains 1
```
ListMonadExample.java
This example demonstrates the List monad. It shows how to perform monadic operations on a standard Java List, treating it as a context that can hold zero or more results.
- Key Concept: The `List` monad represents non-deterministic computation, where an operation can produce multiple results.
- Demonstrates:
  - Wrapping a `List` into a `Kind<ListKind.Witness, A>`.
  - Using `map` to transform every element in the list.
  - Using `flatMap` to apply a function that returns a list to each element, and then flattening the result.
```java
// A function that returns multiple results for even numbers
Function<Integer, Kind<ListKind.Witness, Integer>> duplicateIfEven =
    n -> {
      if (n % 2 == 0) {
        return LIST.widen(Arrays.asList(n, n * 10));
      } else {
        return LIST.widen(List.of()); // Empty list for odd numbers
      }
    };

// flatMap applies the function and flattens the resulting lists
Kind<ListKind.Witness, Integer> flatMappedKind = listMonad.flatMap(duplicateIfEven, numbersKind);
```
MaybeExample.java
This example covers the Maybe monad. Maybe is a type that represents an optional value, similar to Java's Optional, but designed to be used as a monad within the Higher-Kinded-J ecosystem. It has two cases: Just<A> (a value is present) and Nothing (a value is absent).
- Key Concept: The `Maybe` monad provides a way to represent computations that may or may not return a value, explicitly handling the absence of a value.
- Demonstrates:
  - Creating `Just` and `Nothing` instances.
  - Using `map` to transform a `Just` value.
  - Using `flatMap` to chain operations that return a `Maybe`.
  - Handling the `Nothing` case using `handleErrorWith`.
```java
// flatMap to parse a string, which can result in Nothing
Function<String, Kind<MaybeKind.Witness, Integer>> parseString =
    s -> {
      try {
        return MAYBE.just(Integer.parseInt(s));
      } catch (NumberFormatException e) {
        return MAYBE.nothing();
      }
    };
```
OptionalExample.java
This example introduces the Optional monad. It demonstrates how to wrap Java's Optional in a Kind to work with it in a monadic way, allowing for chaining of operations and explicit error handling.
- Key Concept: The `Optional` monad provides a way to represent computations that may or may not return a value.
- Demonstrates:
  - Wrapping `Optional` instances into a `Kind<OptionalKind.Witness, A>`.
  - Using `map` to transform the value inside a present `Optional`.
  - Using `flatMap` to chain operations that return `Optional`.
  - Using `handleErrorWith` to provide a default value when the `Optional` is empty.
```java
// Using flatMap to parse a string to an integer, which may fail
Function<String, Kind<OptionalKind.Witness, Integer>> parseToIntKind =
    s -> {
      try {
        return OPTIONAL.widen(Optional.of(Integer.parseInt(s)));
      } catch (NumberFormatException e) {
        return OPTIONAL.widen(Optional.empty());
      }
    };

Kind<OptionalKind.Witness, Integer> parsedPresent =
    optionalMonad.flatMap(parseToIntKind, presentInput);
```
ReaderExample.java
This example introduces the Reader monad. The Reader monad is a pattern used for dependency injection. It represents a computation that depends on some configuration or environment of type R.
- Key Concept: A `Reader<R, A>` represents a function `R -> A`. It allows you to "read" from a configuration `R` to produce a value `A`, without explicitly passing the configuration object everywhere.
- Demonstrates:
  - Creating `Reader` computations that access parts of a configuration object.
  - Using `flatMap` to chain computations where one step depends on the result of a previous step and the shared configuration.
  - Running the final `Reader` computation by providing a concrete configuration object.
```java
// A Reader that depends on the AppConfig environment
Kind<ReaderKind.Witness<AppConfig>, String> connectionStringReader =
    readerMonad.flatMap(
        dbUrl -> READER.reader(config -> dbUrl + "?apiKey=" + config.apiKey()),
        getDbUrl // Another Reader that gets the DB URL
    );

// The computation is only run when a config is provided
String connectionString = READER.runReader(connectionStringReader, productionConfig);
```
StateExample, BankAccountWorkflow.java
These examples demonstrate the State monad. The State monad is used to manage state in a purely functional way, abstracting away the boilerplate of passing state from one function to the next.
- Key Concept: A `State<S, A>` represents a function `S -> (S, A)`, which takes an initial state and returns a new state and a computed value. The monad chains these functions together.
- Demonstrates:
  - Creating stateful actions like `push`, `pop`, `deposit`, and `withdraw`.
  - Using `State.modify` to update the state and `State.inspect` to read from it.
  - Composing these actions into a larger workflow using a `For` comprehension.
  - Running the final computation with an initial state to get the final state and result.
```java
// A stateful action to withdraw money, returning a boolean success flag
public static Function<BigDecimal, Kind<StateKind.Witness<AccountState>, Boolean>> withdraw(String description) {
  return amount -> STATE.widen(
      State.of(currentState -> {
        if (currentState.balance().compareTo(amount) >= 0) {
          // ... update state and return success
          return new StateTuple<>(true, updatedState);
        } else {
          // ... update state with rejection and return failure
          return new StateTuple<>(false, updatedState);
        }
      })
  );
}
```
TryExample.java
This example introduces the Try monad. It's designed to encapsulate computations that can throw exceptions, making error handling more explicit and functional.
- Key Concept: A `Try` represents a computation that results in either a `Success` containing a value or a `Failure` containing an exception.
- Demonstrates:
  - Creating `Try` instances for successful and failed computations.
  - Using `map` and `flatMap` to chain operations, where exceptions are caught and wrapped in a `Failure`.
  - Using `recover` and `recoverWith` to handle failures and provide alternative values or computations.
```java
// A function that returns a Try, succeeding or failing based on the input
Function<Integer, Try<Double>> safeDivide =
    value ->
        (value == 0)
            ? Try.failure(new ArithmeticException("Div by zero"))
            : Try.success(10.0 / value);

// flatMap chains the operation, propagating failure
Try<Double> result = input.flatMap(safeDivide);
```
ValidatedMonadExample.java
This example showcases the Validated applicative functor. While it has a Monad instance, it's often used as an Applicative to accumulate errors. This example, however, focuses on its monadic (fail-fast) behaviour.
- Key Concept: `Validated` is used for validation scenarios where you want either to get a valid result or to accumulate validation errors.
- Demonstrates:
  - Creating `Valid` and `Invalid` instances.
  - Using `flatMap` to chain validation steps, where the first `Invalid` result short-circuits the computation.
  - Using `handleErrorWith` to recover from a validation failure.
```java
// A validation function that returns a Kind-wrapped Validated
Function<String, Kind<ValidatedKind.Witness<List<String>>, Integer>> parseToIntKind =
    s -> {
      try {
        return validatedMonad.of(Integer.parseInt(s)); // Lifts to Valid
      } catch (NumberFormatException e) {
        return validatedMonad.raiseError(Collections.singletonList("'" + s + "' is not a number."));
      }
    };
```
WriterExample.java
This example introduces the Writer monad. The Writer monad is used for computations that need to produce a log or accumulate a secondary value alongside their primary result.
- Key Concept: A `Writer<W, A>` represents a computation that returns a primary result `A` and an accumulated value `W` (like a log), where `W` must have a `Monoid` instance to define how values are combined.
- Demonstrates:
  - Using `tell` to append to the log.
  - Using `flatMap` to sequence computations, where both the results and logs are combined automatically.
  - Running the final `Writer` to extract both the final value and the fully accumulated log.
```java
// An action that performs a calculation and logs what it did
Function<Integer, Kind<WriterKind.Witness<String>, Integer>> addAndLog =
    x -> {
      int result = x + 10;
      String logMsg = "Added 10 to " + x + " -> " + result + "; ";
      return WRITER.widen(new Writer<>(logMsg, result));
    };

// The monad combines the logs from each step automatically
Kind<WriterKind.Witness<String>, String> finalComputation = writerMonad.flatMap(
    intermediateValue -> multiplyAndLogToString.apply(intermediateValue),
    addAndLog.apply(5)
);
```
GenericExample.java
This example showcases how to write generic functions that can operate on any Functor (or Monad) by accepting the type class instance as a parameter. This is a core concept of higher-kinded polymorphism.
- Key Concept: By abstracting over the computational context (`F`), you can write code that works for `List`, `Optional`, `IO`, or any other type that has a `Functor` instance.
- Demonstrates:
  - Writing a generic `mapWithFunctor` function that takes a `Functor<F>` instance and a `Kind<F, A>`.
  - Calling this generic function with different monad instances (`ListMonad`, `OptionalMonad`) and their corresponding `Kind`-wrapped types.
```java
// A generic function that works for any Functor F
public static <F, A, B> Kind<F, B> mapWithFunctor(
    Functor<F> functorInstance, // The type class instance
    Function<A, B> fn,
    Kind<F, A> kindBox) { // The value in its context
  return functorInstance.map(fn, kindBox);
}

// Calling it with a List
Kind<ListKind.Witness, Integer> doubledList = mapWithFunctor(listMonad, doubleFn, listKind);

// Calling it with an Optional
Kind<OptionalKind.Witness, Integer> doubledOpt = mapWithFunctor(optionalMonad, doubleFn, optKind);
```
ProfunctorExample.java
This example demonstrates the Profunctor type class using FunctionProfunctor, showing how to build flexible, adaptable data transformation pipelines.
- Key Concept: A `Profunctor` is contravariant in its first parameter and covariant in its second, making it perfect for adapting both the input and output of functions.
- Demonstrates:
  - Using `lmap` to adapt function inputs (contravariant mapping).
  - Using `rmap` to adapt function outputs (covariant mapping).
  - Using `dimap` to adapt both input and output simultaneously.
  - Building real-world API adapters and validation pipelines.
  - Creating reusable transformation chains.
```java
// Original function: String length calculator
Function<String, Integer> lengthFunction = String::length;

// Adapt the input: now works with integers!
Kind2<FunctionKind.Witness, Integer, Integer> intToLength =
    profunctor.lmap(Object::toString, lengthFunction);

// Adapt the output: now returns formatted strings!
Kind2<FunctionKind.Witness, String, String> lengthToString =
    profunctor.rmap(len -> "Length: " + len, lengthFunction);

// Adapt both input and output in one operation
Kind2<FunctionKind.Witness, Integer, String> fullTransform =
    profunctor.dimap(Object::toString, len -> "Result: " + len, lengthFunction);
```
Monad Transformers
These examples show how to use monad transformers (EitherT, MaybeT, OptionalT, ReaderT, StateT) to combine the capabilities of different monads.
EitherTExample.java
- Key Concept: `EitherT` stacks the `Either` monad on top of another monad `F`, creating a new monad `EitherT<F, L, R>` that handles both the effects of `F` and the failure logic of `Either`.
- Scenario: Composing synchronous validation (`Either`) with an asynchronous operation (`CompletableFuture`) in a single, clean workflow.
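Conceptually, `EitherT` wraps the nested shape `F<Either<L, R>>`. The JDK-only sketch below shows the unwrapping boilerplate that `EitherT` removes; the hand-rolled `Either` type and the `flatMapRight` helper are illustrative stand-ins, not the library's API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public class EitherTConcept {

  // Minimal stand-in for an Either type (illustrative only, not the library's API)
  sealed interface Either<L, R> permits Left, Right {}
  record Left<L, R>(L error) implements Either<L, R> {}
  record Right<L, R>(R value) implements Either<L, R> {}

  // Without EitherT, every step must unwrap the future AND inspect the Either:
  static <L, R, B> CompletableFuture<Either<L, B>> flatMapRight(
      CompletableFuture<Either<L, R>> fa,
      Function<R, CompletableFuture<Either<L, B>>> f) {
    return fa.thenCompose(either -> {
      if (either instanceof Right<L, R> r) {
        return f.apply(r.value()); // success: run the next async step
      }
      L error = ((Left<L, R>) either).error();
      return CompletableFuture.completedFuture(new Left<>(error)); // failure: short-circuit
    });
  }

  public static void main(String[] args) {
    // Synchronous validation result lifted into an async context
    CompletableFuture<Either<String, Integer>> validated =
        CompletableFuture.completedFuture(new Right<>(42));
    Either<String, String> result =
        flatMapRight(validated, n -> CompletableFuture.completedFuture(new Right<>("id-" + n))).join();
    System.out.println(result);
  }
}
```

`EitherT<F, L, R>` packages exactly this pattern so the nested unwrapping is written once, in the transformer, instead of at every step.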
MaybeTExample.java
- Key Concept: `MaybeT` stacks the `Maybe` monad on top of another monad `F`. This is useful for asynchronous operations that may not return a value.
- Scenario: Fetching a userLogin and their preferences from a database asynchronously, where each step might not find a result.
OptionalTExample.java
- Key Concept: `OptionalT` stacks `Optional` on top of another monad `F`, creating `OptionalT<F, A>` to handle asynchronous operations that may return an empty result.
- Scenario: Fetching a userLogin and their preferences from a database asynchronously, where each step might not find a result.
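The nesting `OptionalT` hides is `CompletableFuture<Optional<A>>`. A JDK-only sketch of the scenario above (the helper and lookup names are illustrative, not the library's API):

```java
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public class OptionalTConcept {

  // The composition OptionalT provides, written by hand (illustrative helper name):
  static <A, B> CompletableFuture<Optional<B>> flatMapPresent(
      CompletableFuture<Optional<A>> fa,
      Function<A, CompletableFuture<Optional<B>>> f) {
    return fa.thenCompose(opt -> opt
        .map(f) // run the next async step only if a value is present
        .orElse(CompletableFuture.completedFuture(Optional.empty()))); // otherwise short-circuit
  }

  // Simulated async lookups: a userLogin, then their preferences
  static CompletableFuture<Optional<String>> findUserLogin(int id) {
    return CompletableFuture.completedFuture(id == 1 ? Optional.of("alice") : Optional.empty());
  }

  static CompletableFuture<Optional<String>> findPreferences(String login) {
    return CompletableFuture.completedFuture(Optional.of(login + ":dark-mode"));
  }

  public static void main(String[] args) {
    System.out.println(flatMapPresent(findUserLogin(1), OptionalTConcept::findPreferences).join());
    System.out.println(flatMapPresent(findUserLogin(2), OptionalTConcept::findPreferences).join());
  }
}
```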
ReaderTExample.java, ReaderTUnitExample.java, ReaderTAsyncUnitExample.java
- Key Concept: `ReaderT` combines the `Reader` monad (for dependency injection) with an outer monad `F`. This allows for computations that both read from a shared environment and have effects of type `F`.
- Scenario: An asynchronous workflow that depends on a configuration object (`AppConfig`) to fetch and process data.
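A `ReaderT` value is essentially a function `R -> F<A>`. The JDK-only sketch below sequences two such steps by hand, with `AppConfig` and the helper names as illustrative assumptions rather than the library's API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public class ReaderTConcept {

  // Illustrative environment type, not the library's
  record AppConfig(String apiUrl, String apiKey) {}

  // Sequencing two environment-reading async steps "by hand":
  // each step has the shape AppConfig -> CompletableFuture<A>
  static <A, B> Function<AppConfig, CompletableFuture<B>> flatMap(
      Function<AppConfig, CompletableFuture<A>> ra,
      Function<A, Function<AppConfig, CompletableFuture<B>>> f) {
    return config -> ra.apply(config).thenCompose(a -> f.apply(a).apply(config));
  }

  static Function<AppConfig, CompletableFuture<String>> fetchBaseUrl() {
    return config -> CompletableFuture.completedFuture(config.apiUrl());
  }

  public static void main(String[] args) {
    Function<AppConfig, CompletableFuture<String>> connection =
        flatMap(fetchBaseUrl(),
            url -> config -> CompletableFuture.completedFuture(url + "?apiKey=" + config.apiKey()));
    // The environment is supplied once, at the very end:
    System.out.println(connection.apply(new AppConfig("https://api.example.com", "secret")).join());
  }
}
```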
StateTExample.java, StateTStackExample
- Key Concept: `StateT` combines the `State` monad with an outer monad `F`. This is for stateful computations that also involve effects from `F`.
- Scenario: A stateful stack that can fail (using `Optional` as the outer monad), where popping from an empty stack results in `Optional.empty()`.
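A `StateT` step over `Optional` is essentially a function `S -> Optional<(newState, result)>`. A JDK-only sketch of the failing-stack scenario (the `StepResult` record and `pop` helper are illustrative, not the library's API):

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

public class StateTConcept {

  // Illustrative pair of (new state, computed value)
  record StepResult<S, A>(S state, A value) {}

  // pop: fails (Optional.empty) on an empty stack; otherwise returns head + rest
  static Function<List<Integer>, Optional<StepResult<List<Integer>, Integer>>> pop() {
    return stack -> stack.isEmpty()
        ? Optional.empty() // the whole stateful run aborts here
        : Optional.of(new StepResult<>(stack.subList(1, stack.size()), stack.get(0)));
  }

  public static void main(String[] args) {
    System.out.println(pop().apply(List.of(1, 2, 3))); // present: value 1, remaining state [2, 3]
    System.out.println(pop().apply(List.of()));        // empty: failure short-circuits
  }
}
```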
For more advanced patterns combining State with other monads, see the Order Processing Example which demonstrates StateT with EitherT.
Quick Reference Guide
This section provides at-a-glance summaries of all type classes in Higher-Kinded-J. Use this as a quick lookup while coding or to compare different type classes.
Core Type Classes
Functor
Core Method: map(Function<A,B> f, Kind<F,A> fa) -> Kind<F,B>
Purpose: Transform values inside a context without changing the context structure
Use When:
- You have a simple transformation function
A -> B - The context/container should remain unchanged
- No dependency between input and output contexts
Laws:
- Identity: `map(identity) == identity`
- Composition: `map(g ∘ f) == map(g) ∘ map(f)`
Common Instances: List, Optional, Maybe, Either, IO, CompletableFuture
Example:
```java
// Transform string to length, preserving Optional context
Kind<OptionalKind.Witness, Integer> lengths =
    optionalFunctor.map(String::length, optionalString);
```
Think Of It As: Applying a function "inside the box" without opening it
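Both laws can be checked concretely with plain `java.util.Optional`, whose `map` method is a Functor's `map` specialised to one context:

```java
import java.util.Optional;
import java.util.function.Function;

// Checking the two Functor laws with java.util.Optional
public class FunctorLaws {
  public static void main(String[] args) {
    Optional<String> fa = Optional.of("hello");
    Function<String, Integer> f = String::length;
    Function<Integer, Integer> g = n -> n * 2;

    // Identity: mapping the identity function changes nothing
    boolean identity = fa.map(Function.identity()).equals(fa);

    // Composition: map(g ∘ f) == map(f) then map(g)
    boolean composition = fa.map(f.andThen(g)).equals(fa.map(f).map(g));

    System.out.println(identity + " " + composition);
  }
}
```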
Applicative
Core Methods:

- `of(A value) -> Kind<F,A>` (lift pure value)
- `ap(Kind<F,Function<A,B>> ff, Kind<F,A> fa) -> Kind<F,B>` (apply wrapped function)
Purpose: Combine independent computations within a context
Use When:
- You need to combine multiple wrapped values
- Operations are independent (don't depend on each other's results)
- You want to accumulate errors from multiple validations
Key Insight: map2, map3, etc. are built on ap for combining 2, 3, or more values
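This insight can be demonstrated with plain `java.util.Optional` standing in as the Applicative; the `ap` and `map2` helpers below are a sketch of the construction, not the library's API:

```java
import java.util.Optional;
import java.util.function.BiFunction;
import java.util.function.Function;

// Showing that map2 is built from `of`-style lifting plus `ap`,
// using java.util.Optional as the Applicative (illustrative helpers).
public class ApplicativeSketch {

  // ap: apply a wrapped function to a wrapped value
  static <A, B> Optional<B> ap(Optional<Function<A, B>> ff, Optional<A> fa) {
    return ff.flatMap(fa::map);
  }

  // map2: partially apply the function inside the context, then ap
  static <A, B, C> Optional<C> map2(Optional<A> fa, Optional<B> fb, BiFunction<A, B, C> f) {
    Optional<Function<B, C>> partial = fa.map(a -> b -> f.apply(a, b));
    return ap(partial, fb);
  }

  public static void main(String[] args) {
    System.out.println(map2(Optional.of(2), Optional.of(3), Integer::sum));
    System.out.println(map2(Optional.<Integer>empty(), Optional.of(3), Integer::sum));
  }
}
```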
Laws: Identity, Composition, Homomorphism, Interchange
Common Patterns:
- Form validation (collect all errors)
- Combining configuration values
- Parallel computations
Example:
```java
// Combine two independent validations
Kind<ValidatedKind.Witness<List<String>>, User> userLogin =
    applicative.map2(
        validateUsername(input.username()),
        validatePassword(input.password()),
        User::new
    );
```
Think Of It As: Combining multiple "boxes" when contents are independent
Monad
Core Method: flatMap(Function<A,Kind<F,B>> f, Kind<F,A> fa) -> Kind<F,B>
Purpose: Sequence dependent computations within a context
Use When:
- Each step depends on the result of the previous step
- You need to chain operations that return wrapped values
- You want short-circuiting behaviour on failure
Key Difference from Applicative: Operations are sequential and dependent
Laws:
- Left Identity: `flatMap(f, of(a)) == f(a)`
- Right Identity: `flatMap(of, m) == m`
- Associativity: `flatMap(g, flatMap(f, m)) == flatMap(x -> flatMap(g, f(x)), m)`
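These laws can be verified directly with `java.util.Optional` (its `flatMap` takes the function as the sole argument, unlike the type class, but the laws are unchanged):

```java
import java.util.Optional;
import java.util.function.Function;

// Verifying the three monad laws with java.util.Optional,
// where `of` is Optional.of and flatMap is Optional.flatMap.
public class MonadLaws {
  public static void main(String[] args) {
    Function<Integer, Optional<String>> f = n -> Optional.of("n=" + n);
    Function<String, Optional<Integer>> g = s -> Optional.of(s.length());
    Optional<Integer> m = Optional.of(7);

    // Left identity: of(a).flatMap(f) == f(a)
    boolean left = Optional.of(7).flatMap(f).equals(f.apply(7));

    // Right identity: m.flatMap(of) == m
    boolean right = m.flatMap(Optional::of).equals(m);

    // Associativity: (m.flatMap(f)).flatMap(g) == m.flatMap(x -> f(x).flatMap(g))
    boolean assoc = m.flatMap(f).flatMap(g).equals(m.flatMap(x -> f.apply(x).flatMap(g)));

    System.out.println(left + " " + right + " " + assoc);
  }
}
```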
Utility Methods:

- `as(B value, Kind<F,A> fa)` - replace value, keep effect
- `peek(Consumer<A> action, Kind<F,A> fa)` - side effect without changing value
- `flatMap2/3/4/5(...)` - combine multiple monadic values where the combining function itself returns a monadic value (useful for dependent validations or operations)
Example:
```java
// Chain database operations where each depends on the previous
Kind<OptionalKind.Witness, Account> account =
    monad.flatMap(userLogin ->
        monad.flatMap(profile ->
            findAccount(profile.accountId()),
            findProfile(userLogin.id())),
        findUser(userId));

// Combine multiple monadic values with an effectful result
Kind<OptionalKind.Witness, Order> order =
    monad.flatMap2(
        findUser(userId),
        findProduct(productId),
        (user, product) -> validateAndCreateOrder(user, product) // Returns Optional
    );
```
Think Of It As: Chaining operations where each "opens the box" and "puts result in new box"
MonadError
Core Methods:

- `raiseError(E error) -> Kind<F,A>` (create error state)
- `handleErrorWith(Kind<F,A> fa, Function<E,Kind<F,A>> handler) -> Kind<F,A>` (recover from error)
Purpose: Add explicit error handling to monadic computations
Use When:
- You need to handle specific error types
- You want to recover from failures in a workflow
- You need to distinguish between different kinds of failures
Key Insight: Error type E is fixed for each MonadError instance
Common Error Types:

- `Throwable` for `CompletableFuture`
- `Unit` for `Optional`/`Maybe` (absence as error)
- Custom domain error types for `Either`/`Validated`
Recovery Methods:

- `handleError(fa, Function<E,A> handler)` - recover to pure value
- `recover(fa, A defaultValue)` - provide default value
Example:
```java
// Handle division by zero gracefully
Kind<EitherKind.Witness<String>, Double> result =
    monadError.handleErrorWith(
        divideOperation,
        error -> monadError.of(0.0) // recover with default
    );
```
Think Of It As: try-catch for functional programming
Selective
Core Methods:

- `select(Kind<F,Choice<A,B>> fab, Kind<F,Function<A,B>> ff) -> Kind<F,B>` (conditional function application)
- `whenS(Kind<F,Boolean> cond, Kind<F,Unit> effect) -> Kind<F,Unit>` (conditional effect)
- `ifS(Kind<F,Boolean> cond, Kind<F,A> then, Kind<F,A> else) -> Kind<F,A>` (if-then-else)
Purpose: Execute effects conditionally with static structure (all branches known upfront)
Use When:
- You need conditional effects but want static analysis
- All possible branches should be visible at construction time (enabling static analysis)
- You want more power than Applicative but less than Monad
- Building feature flags, conditional logging, or validation with alternatives
Key Insight: Sits between Applicative and Monad - provides conditional effects without full dynamic choice
Common Patterns:
- Feature flag activation
- Debug/production mode switching
- Multi-source configuration fallback
- Conditional validation
Example:
```java
// Only log if debug flag is enabled
Selective<IOKind.Witness> selective = IOSelective.INSTANCE;

Kind<IOKind.Witness, Boolean> debugEnabled =
    IO_KIND.widen(IO.delay(() -> config.isDebug()));
Kind<IOKind.Witness, Unit> logEffect =
    IO_KIND.widen(IO.fromRunnable(() -> log.debug("Debug info")));

Kind<IOKind.Witness, Unit> conditionalLog = selective.whenS(debugEnabled, logEffect);
// The log effect only executes if debugEnabled is true
```
Think Of It As: If-then-else for functional programming with compile-time visible branches
Data Combination Type Classes
Semigroup
Core Method: combine(A a1, A a2) -> A
Purpose: Types that can be combined associatively
Key Property: Associativity - combine(a, combine(b, c)) == combine(combine(a, b), c)
Use When:
- You need to combine/merge two values of the same type
- Order of combination doesn't matter (due to associativity)
- Building blocks for parallel processing
Common Instances:
- String concatenation: `"a" + "b" + "c"`
- Integer addition: `1 + 2 + 3`
- List concatenation: `[1,2] + [3,4] + [5,6]`
- Set union: `{1,2} ∪ {2,3} ∪ {3,4}`
Example:
```java
// Combine error messages
Semigroup<String> stringConcat = Semigroups.string("; ");
String combined = stringConcat.combine("Error 1", "Error 2");
// Result: "Error 1; Error 2"
```
Think Of It As: The + operator generalised to any type
Monoid
Core Methods:
- `combine(A a1, A a2) -> A` (from Semigroup)
- `empty() -> A` (identity element)
Purpose: Semigroups with an identity/neutral element
Key Properties:
- Associativity (from Semigroup)
- Identity: `combine(a, empty()) == combine(empty(), a) == a`
Use When:
- You need a starting value for reductions/folds
- Implementing fold operations over data structures
- You might be combining zero elements
Common Instances:
- String: empty = `""`, combine = concatenation
- Integer addition: empty = `0`, combine = `+`
- Integer multiplication: empty = `1`, combine = `*`
- List: empty = `[]`, combine = concatenation
- Boolean AND: empty = `true`, combine = `&&`
Example:
// Sum a list using integer addition monoid
Integer sum = listFoldable.foldMap(
Monoids.integerAddition(),
Function.identity(),
numbersList
);
Think Of It As: Semigroup + a "starting point" for combinations
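To make the role of `empty()` concrete, here is a small, self-contained sketch of a monoid-driven fold. The `Semigroup` and `Monoid` interfaces below are deliberately minimal stand-ins, not the library's actual definitions:

```java
import java.util.List;
import java.util.function.Function;

// Minimal stand-in interfaces for illustration only; the real library's
// Semigroup and Monoid interfaces differ in packaging and detail.
interface Semigroup<A> { A combine(A a1, A a2); }
interface Monoid<A> extends Semigroup<A> { A empty(); }

public class MonoidFoldDemo {

  // The integer-addition monoid: empty = 0, combine = +
  public static final Monoid<Integer> SUM = new Monoid<>() {
    public Integer empty() { return 0; }
    public Integer combine(Integer a, Integer b) { return a + b; }
  };

  // foldMap over a plain List: map each element to M, then combine the
  // results, starting from the monoid's identity element.
  public static <A, M> M foldMap(Monoid<M> m, Function<A, M> f, List<A> as) {
    M acc = m.empty();
    for (A a : as) {
      acc = m.combine(acc, f.apply(a));
    }
    return acc;
  }

  public static void main(String[] args) {
    System.out.println(foldMap(SUM, Function.identity(), List.of(1, 2, 3, 4, 5))); // 15
    // Combining zero elements is well-defined: the result is empty()
    System.out.println(foldMap(SUM, Function.identity(), List.<Integer>of()));     // 0
  }
}
```

Because `empty()` supplies the starting value, the fold is total: it works even for an empty input, which a bare `Semigroup` cannot guarantee.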
Structure-Iterating Type Classes
Foldable
Core Method: foldMap(Monoid<M> monoid, Function<A,M> f, Kind<F,A> fa) -> M
Purpose: Reduce a data structure to a single summary value
Use When:
- You want to aggregate/summarise data in a structure
- You need different types of reductions (sum, concat, any/all, etc.)
- You want to count, find totals, or collapse collections
Key Insight: Different Monoids give different aggregations from same data
Common Operations:
- Sum numbers: use integer addition monoid
- Concatenate strings: use string monoid
- Check all conditions: use boolean AND monoid
- Count elements: map to 1, use integer addition monoid
Example:
// Multiple aggregations of the same list
List<Integer> numbers = List.of(1, 2, 3, 4, 5);
// Sum
Integer sum = foldable.foldMap(Monoids.integerAddition(),
Function.identity(), numbers); // 15
// Concatenate as strings
String concat = foldable.foldMap(Monoids.string(),
String::valueOf, numbers); // "12345"
// Check all positive
Boolean allPositive = foldable.foldMap(Monoids.booleanAnd(),
n -> n > 0, numbers); // true
Think Of It As: Swiss Army knife for data aggregation
Traverse
Core Method: traverse(Applicative<G> app, Function<A,Kind<G,B>> f, Kind<F,A> fa) -> Kind<G,Kind<F,B>>
Purpose: Apply an effectful function to each element and "flip" the contexts
Use When:
- You have a collection and want to apply an effect to each element
- You want to validate every item and collect all errors
- You need to "turn inside-out": `F<G<A>>` becomes `G<F<A>>`
Key Operations:
- `traverse`: apply function then flip
- `sequence`: just flip contexts (when you already have `F<G<A>>`)
Common Patterns:
- Validate every item in a list
- Make async calls for each element
- Parse/process each item, collecting all failures
Example:
// Validate every string in a list, collect all errors
List<String> inputs = List.of("123", "abc", "456");
Kind<ValidatedKind.Witness<List<String>>, Kind<ListKind.Witness, Integer>> result =
listTraverse.traverse(
validatedApplicative,
this::parseInteger, // String -> Validated<List<String>, Integer>
LIST.widen(inputs)
);
// Result: either Valid(List[123, 456]) or Invalid(["abc is not a number"])
Think Of It As: Applying effects to collections while flipping the "nesting order"
Dual-Parameter Type Classes
Profunctor
Core Methods:
- `lmap(Function<C,A> f, Kind2<P,A,B> pab) -> Kind2<P,C,B>` (contravariant on input)
- `rmap(Function<B,D> g, Kind2<P,A,B> pab) -> Kind2<P,A,D>` (covariant on output)
- `dimap(Function<C,A> f, Function<B,D> g, Kind2<P,A,B> pab) -> Kind2<P,C,D>` (both)
Purpose: Adapt inputs and outputs of two-parameter types (especially functions)
Use When:
- Building flexible data transformation pipelines
- Creating API adapters that convert between different formats
- You need to preprocess inputs or postprocess outputs
- Building reusable validation or transformation logic
Key Insight:
- `lmap` = preprocess the input (contravariant)
- `rmap` = postprocess the output (covariant)
- `dimap` = do both transformations
Common Instance: Function<A,B> is the canonical Profunctor
Example:
// Adapt a string length function to work with integers and return formatted strings
Function<String, Integer> stringLength = String::length;
// Input adaptation: Integer -> String
Kind2<FunctionKind.Witness, Integer, Integer> intToLength =
profunctor.lmap(Object::toString, FUNCTION.widen(stringLength));
// Output adaptation: Integer -> String
Kind2<FunctionKind.Witness, String, String> lengthToString =
profunctor.rmap(len -> "Length: " + len, FUNCTION.widen(stringLength));
// Both adaptations
Kind2<FunctionKind.Witness, Integer, String> fullAdaptation =
profunctor.dimap(Object::toString, len -> "Result: " + len,
FUNCTION.widen(stringLength));
Think Of It As: The adapter pattern for functional programming
Decision Guide
Start Simple, Go Complex:
- Functor - Simple transformations, context unchanged
- Applicative - Combine independent computations
- Selective OR Monad - Choose based on needs:
- Selective: Conditional effects with all branches visible upfront (static analysis)
- Monad: Chain dependent computations with dynamic choice
- MonadError - Add error handling to Monad
- Traverse - Apply effects to collections
- Profunctor - Adapt inputs/outputs of functions
Decision Tree:
- Need to transform values? → Functor
- Need to combine independent operations? → Applicative
- Need conditional effects with static structure? → Selective
- Need sequential dependent operations? → Monad (chain dependent computations with dynamic choices based on previous results)
- Need error recovery? → MonadError
- Need to process collections with effects? → Traverse
- Need to adapt function interfaces? → Profunctor
- Need to aggregate/summarise data? → Foldable
- Need to combine values? → Semigroup/Monoid
Common Patterns:
- Form validation: Applicative (independent fields) or Traverse (list of fields)
- Database operations: Monad (dependent queries) + MonadError (failure handling)
- API integration: Profunctor (adapt formats) + Monad (chain calls)
- Configuration: Applicative (combine settings) + Reader (dependency injection)
- Conditional effects: Selective (feature flags, debug mode) or Monad (dynamic choice)
- Configuration fallback: Selective (try multiple sources with static branches)
- Logging: Writer (accumulate logs) + Monad (sequence operations)
- State management: State/StateT (thread state) + Monad (sequence updates)
Type Hierarchy
```
           Functor
              ↑
        Applicative ← Apply
         ↗       ↖
  Selective       Monad
                    ↑
               MonadError

  Semigroup       Functor + Foldable
      ↑                   ↑
   Monoid             Traverse

  (Two-parameter types)
  Profunctor      Bifunctor
```
Inheritance Meaning:
- Every Applicative is also a Functor
- Every Selective is also an Applicative (and therefore a Functor)
- Every Monad is also an Applicative (and therefore a Functor)
- Every MonadError is also a Monad (and therefore Applicative and Functor)
- Selective and Monad are siblings - both extend Applicative directly
- Every Monoid is also a Semigroup
- Every Traverse provides both Functor and Foldable capabilities
Practical Implication: If you have a Monad<F> instance, you can also use it as an Applicative<F> or Functor<F>. Selective and Monad are alternative extensions of Applicative with different trade-offs.
Common Monoid Instances
Numeric:
- `Monoids.integerAddition()` - sum integers (empty = 0)
- `Monoids.integerMultiplication()` - multiply integers (empty = 1)
Text:
- `Monoids.string()` - concatenate strings (empty = "")
- `Monoids.string(delimiter)` - join with delimiter
Boolean:
- `Monoids.booleanAnd()` - logical AND (empty = true)
- `Monoids.booleanOr()` - logical OR (empty = false)
Collections:
- `Monoids.list()` - concatenate lists (empty = [])
Custom:
// Create your own monoid
Monoid<MyType> myMonoid = new Monoid<MyType>() {
public MyType empty() { return MyType.defaultValue(); }
public MyType combine(MyType a, MyType b) { return a.mergeWith(b); }
};
Performance Notes
When to Use HKT vs Direct Methods:
Use HKT When:
- Writing generic code that works with multiple container types
- Building complex workflows with multiple type classes
- You need the power of type class composition
Use Direct Methods When:
- Simple, one-off transformations
- Performance-critical hot paths
- Working with a single, known container type
Examples:
// Hot path - use direct method
Optional<String> result = optional.map(String::toUpperCase);
// Generic reusable code - use HKT
public static <F> Kind<F, String> normalise(Functor<F> functor, Kind<F, String> input) {
return functor.map(String::toUpperCase, input);
}
- Memory: the HKT simulation adds minimal overhead (a single wrapper object per operation)
- CPU: direct method calls and type class method calls perform comparably on modern JVMs
Extending Higher Kinded Type Simulation
You can add support for new Java types (type constructors) to the Higher-Kinded-J simulation framework, allowing them to be used with type classes like Functor, Monad, etc.
There are two main scenarios:
- Adapting External Types: for types you don't own (e.g., JDK classes like `java.util.Set`, `java.util.Map`, or classes from other libraries).
- Integrating Custom Library Types: for types defined within your own project or a library you control, where you can modify the type itself.
Note: Within Higher-Kinded-J, core library types like `IO`, `Maybe`, and `Either` follow Scenario 2: they directly implement their respective Kind interfaces (`IOKind`, `MaybeKind`, `EitherKind`). This provides zero runtime overhead for widen/narrow operations.
The core pattern involves creating:
- An `XxxKind` interface with a nested `Witness` type (this remains the same).
- An `XxxConverterOps` interface defining the `widen` and `narrow` operations for the specific type.
- An `XxxKindHelper` enum that implements `XxxConverterOps` and provides a singleton instance (e.g., `SET`, `MY_TYPE`) for accessing these operations as instance methods.
- Type class instances (e.g., for `Functor`, `Monad`).
For external types, an additional XxxHolder record is typically used internally by the helper enum to wrap the external type.
Scenario 1: Adapting an External Type (e.g., java.util.Set<A>)
Since we cannot modify java.util.Set to directly implement our Kind structure, we need a wrapper (a Holder).
Goal: Simulate java.util.Set<A> as Kind<SetKind.Witness, A> and provide Functor, Applicative, and Monad instances for it.
Note: This pattern is useful when integrating third-party libraries or JDK types that you cannot modify directly.
1. Create the `Kind` Interface with Witness (`SetKind.java`):
   - Define a marker interface that extends `Kind<SetKind.Witness, A>`.
   - Inside this interface, define a `static final class Witness {}` which will serve as the phantom type `F` for `Set`.

```java
package org.higherkindedj.hkt.set; // Example package

import org.higherkindedj.hkt.Kind;
import org.jspecify.annotations.NullMarked;

/**
 * Kind interface marker for java.util.Set<A>.
 * The Witness type F = SetKind.Witness
 * The Value type A = A
 */
@NullMarked
public interface SetKind<A> extends Kind<SetKind.Witness, A> {
  /** Witness type for {@link java.util.Set} to be used with {@link Kind}. */
  final class Witness {
    private Witness() {}
  }
}
```
2. Create the `ConverterOps` Interface (`SetConverterOps.java`):
   - Define an interface specifying the `widen` and `narrow` methods for `Set`.

```java
package org.higherkindedj.hkt.set;

import java.util.Set;
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.exception.KindUnwrapException; // If narrow throws it
import org.jspecify.annotations.NonNull;
import org.jspecify.annotations.Nullable;

public interface SetConverterOps {
  <A> @NonNull Kind<SetKind.Witness, A> widen(@NonNull Set<A> set);

  <A> @NonNull Set<A> narrow(@Nullable Kind<SetKind.Witness, A> kind)
      throws KindUnwrapException;
}
```
3. Create the `KindHelper` Enum with an Internal `Holder` (`SetKindHelper.java`):
   - Define an `enum` (e.g., `SetKindHelper`) that implements `SetConverterOps`.
   - Provide a singleton instance (e.g., `SET`).
   - Inside this helper, define a package-private `record SetHolder<A>(@NonNull Set<A> set) implements SetKind<A> {}`. This record wraps the actual `java.util.Set`.
   - `widen` method: takes the Java type (e.g., `Set<A>`), performs null checks, and returns a new `SetHolder<>(set)` as a `Kind<SetKind.Witness, A>`.
   - `narrow` method: takes `Kind<SetKind.Witness, A> kind`, performs null checks, verifies `kind instanceof SetHolder`, extracts the underlying `Set<A>`, and returns it. It throws `KindUnwrapException` for any structural invalidity.

```java
package org.higherkindedj.hkt.set;

import java.util.Objects;
import java.util.Set;
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.exception.KindUnwrapException;
import org.jspecify.annotations.NonNull;
import org.jspecify.annotations.Nullable;

public enum SetKindHelper implements SetConverterOps {
  SET; // Singleton instance

  // Error messages can be static final within the enum
  private static final String ERR_INVALID_KIND_NULL = "Cannot narrow null Kind for Set";
  private static final String ERR_INVALID_KIND_TYPE = "Kind instance is not a SetHolder: ";
  private static final String ERR_NULL_SET = "Input Set cannot be null for widen";

  // Holder Record (package-private for testability if needed)
  record SetHolder<AVal>(@NonNull Set<AVal> set) implements SetKind<AVal> {}

  @Override
  public <A> @NonNull Kind<SetKind.Witness, A> widen(@NonNull Set<A> set) {
    Objects.requireNonNull(set, ERR_NULL_SET);
    return new SetHolder<>(set);
  }

  @Override
  @SuppressWarnings("unchecked")
  public <A> @NonNull Set<A> narrow(@Nullable Kind<SetKind.Witness, A> kind) {
    if (kind == null) {
      throw new KindUnwrapException(ERR_INVALID_KIND_NULL);
    }
    if (kind instanceof SetHolder<?> holder) {
      // SetHolder's 'set' component is @NonNull, so holder.set() is guaranteed non-null.
      return (Set<A>) holder.set();
    } else {
      throw new KindUnwrapException(ERR_INVALID_KIND_TYPE + kind.getClass().getName());
    }
  }
}
```
Scenario 2: Integrating a Custom Library Type
If you are defining a new type within your library (e.g., a custom MyType<A>), you can design it to directly participate in the HKT simulation. This approach typically doesn't require an explicit Holder record if your type can directly implement the XxxKind interface.
Examples in Higher-Kinded-J:
`IO<A>`, `Maybe<A>` (via `Just<T>` and `Nothing<T>`), `Either<L,R>` (via `Left` and `Right`), `Validated<E,A>`, `Id<A>`, and the monad transformers all use this pattern. Their widen/narrow operations are simple type-safe casts with no wrapper-object allocation.
1. Define Your Type and its `Kind` Interface:
   - Your custom type (e.g., `MyType<A>`) directly implements its corresponding `MyTypeKind<A>` interface.
   - `MyTypeKind<A>` extends `Kind<MyType.Witness, A>`; the nested `Witness` class is defined inside `MyType` itself, so the witness name matches the `MyType.Witness` references used elsewhere.

```java
package org.example.mytype;

import org.higherkindedj.hkt.Kind;
import org.jspecify.annotations.NullMarked;

// 1. The Kind interface; the Witness class it references lives in MyType below.
//    (In a real project, each top-level type goes in its own file.)
@NullMarked
public interface MyTypeKind<A> extends Kind<MyType.Witness, A> {}

// 2. Your Custom Type directly implements its Kind interface
public record MyType<A>(A value) implements MyTypeKind<A> {
  /** Witness type for MyType, to be used with {@link Kind}. */
  public static final class Witness {
    private Witness() {}
  }
  // ... constructors, methods for MyType ...
}
```
2. Create the `ConverterOps` Interface (`MyTypeConverterOps.java`):
   - Define an interface specifying the `widen` and `narrow` methods for `MyType`.

```java
package org.example.mytype;

import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.exception.KindUnwrapException;
import org.jspecify.annotations.NonNull;
import org.jspecify.annotations.Nullable;

public interface MyTypeConverterOps {
  <A> @NonNull Kind<MyType.Witness, A> widen(@NonNull MyType<A> myTypeValue);

  <A> @NonNull MyType<A> narrow(@Nullable Kind<MyType.Witness, A> kind)
      throws KindUnwrapException;
}
```
3. Create the `KindHelper` Enum (`MyTypeKindHelper.java`):
   - Define an `enum` (e.g., `MyTypeKindHelper`) that implements `MyTypeConverterOps`.
   - Provide a singleton instance (e.g., `MY_TYPE`).
   - `widen(MyType<A> myTypeValue)`: since `MyType<A>` is already a `MyTypeKind<A>` (and thus a `Kind`), this method performs a null check and returns the value directly; no wrapper or cast is required.
   - `narrow(Kind<MyType.Witness, A> kind)`: this method checks `if (kind instanceof MyType<?> myTypeInstance)` and then casts and returns `myTypeInstance`.

```java
package org.example.mytype;

import java.util.Objects;
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.exception.KindUnwrapException;
import org.jspecify.annotations.NonNull;
import org.jspecify.annotations.Nullable;

public enum MyTypeKindHelper implements MyTypeConverterOps {
  MY_TYPE; // Singleton instance

  private static final String ERR_INVALID_KIND_NULL = "Cannot narrow null Kind for MyType";
  private static final String ERR_INVALID_KIND_TYPE = "Kind instance is not a MyType: ";

  @Override
  public <A> @NonNull Kind<MyType.Witness, A> widen(@NonNull MyType<A> myTypeValue) {
    Objects.requireNonNull(myTypeValue, "Input MyType cannot be null for widen");
    return myTypeValue; // MyType<A> is already a Kind<MyType.Witness, A>
  }

  @Override
  @SuppressWarnings("unchecked")
  public <A> @NonNull MyType<A> narrow(@Nullable Kind<MyType.Witness, A> kind) {
    if (kind == null) {
      throw new KindUnwrapException(ERR_INVALID_KIND_NULL);
    }
    if (kind instanceof MyType<?> myTypeInstance) { // Pattern match for MyType
      return (MyType<A>) myTypeInstance; // Direct cast
    } else {
      throw new KindUnwrapException(ERR_INVALID_KIND_TYPE + kind.getClass().getName());
    }
  }
}
```
4. Implement Type Class Instances:
   - These will be similar to the external type scenario (e.g., `MyTypeMonad implements Monad<MyType.Witness>`), using `MyTypeKindHelper.MY_TYPE.widen(...)` and `MyTypeKindHelper.MY_TYPE.narrow(...)` (or, with a static import, `MY_TYPE.widen(...)`).
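The whole Scenario 2 pattern can be seen end-to-end in miniature. The `Kind` and `Functor` interfaces below are simplified stand-ins defined locally so that the sketch compiles on its own; the real library's interfaces carry nullness annotations and live in `org.higherkindedj.hkt`:

```java
import java.util.function.Function;

// Minimal stand-ins for the library's Kind and Functor interfaces,
// simplified here so the example is self-contained and runnable.
interface Kind<F, A> {}

interface Functor<F> {
  <A, B> Kind<F, B> map(Function<? super A, ? extends B> f, Kind<F, A> fa);
}

// Scenario 2 in miniature: the custom type implements its Kind interface
// directly, so widen/narrow are plain casts with no wrapper allocation.
record MyType<A>(A value) implements Kind<MyType.Witness, A> {
  public static final class Witness {
    private Witness() {}
  }
}

final class MyTypeFunctor implements Functor<MyType.Witness> {
  static final MyTypeFunctor INSTANCE = new MyTypeFunctor();

  @Override
  @SuppressWarnings("unchecked")
  public <A, B> Kind<MyType.Witness, B> map(
      Function<? super A, ? extends B> f, Kind<MyType.Witness, A> fa) {
    MyType<A> myType = (MyType<A>) fa;            // "narrow": a safe cast, no unwrapping
    return new MyType<>(f.apply(myType.value())); // result is already a Kind ("widen")
  }
}

public class Scenario2Demo {
  public static void main(String[] args) {
    Kind<MyType.Witness, Integer> k = new MyType<>(21);
    MyType<Integer> doubled = (MyType<Integer>) MyTypeFunctor.INSTANCE.map(n -> n * 2, k);
    System.out.println(doubled.value()); // 42
  }
}
```

The key property this demonstrates is that widen/narrow cost nothing at runtime: the value itself is the `Kind`, so no holder object is ever allocated.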
- Immutability: Favour immutable data structures for your `Holder` or custom type if possible, as this aligns well with functional programming principles.
- Null Handling: Be very clear about null handling. Can the wrapped Java type be null? Can the value `A` inside be null? A `KindHelper`'s `widen` method should typically reject a null container itself. `Monad.of(null)` behaviour depends on the specific monad (e.g., `OptionalMonad.OPTIONAL_MONAD.of(null)` is empty via `OPTIONAL.widen(Optional.empty())`, while `ListMonad.LIST_MONAD.of(null)` might be an empty list or a list containing a null element, depending on its definition).
- Testing: Thoroughly test your `XxxKindHelper` enum (especially `narrow` with invalid inputs) and your type class instances (Functor, Applicative, Monad laws).
By following these patterns, you can integrate new or existing types into the Higher-Kinded-J framework, enabling them to be used with generic functional abstractions. The KindHelper enums, along with their corresponding ConverterOps interfaces, provide a standardised way to handle the widen and narrow conversions.
Core API Interfaces: The Building Blocks
The hkj-api module contains the heart of the higher-kinded-j library—a set of interfaces that define the core functional programming abstractions. These are the building blocks you will use to write powerful, generic, and type-safe code.
This document provides a high-level overview of the most important interfaces, which are often referred to as type classes.
Core HKT Abstraction
At the very centre of the library is the Kind interface, which makes higher-kinded types possible in Java.
- `Kind<F, A>`: This is the foundational interface that emulates a higher-kinded type. It represents a type `F` that is generic over a type `A`. For example, `Kind<ListKind.Witness, String>` represents a `List<String>`. You will see this interface used everywhere as the common currency for all our functional abstractions.
The Monad Hierarchy
The most commonly used type classes form a hierarchy of power and functionality, starting with Functor and building up to Monad.
Functor<F>
A Functor is a type class for any data structure that can be "mapped over". It provides a single operation, map, which applies a function to the value(s) inside the structure without changing the structure itself.
- Key Method: `map(Function<A, B> f, Kind<F, A> fa)`
- Intuition: If you have a `List<A>` and a function `A -> B`, a `Functor` for `List` lets you produce a `List<B>`. The same logic applies to `Optional`, `Either`, `Try`, etc.
Applicative<F>
An Applicative (or Applicative Functor) is a Functor with more power. It allows you to apply a function that is itself wrapped in the data structure. This is essential for combining multiple independent computations.
- Key Methods:
  - `of(A value)`: Lifts a normal value `A` into the applicative context `F<A>`.
  - `ap(Kind<F, Function<A, B>> ff, Kind<F, A> fa)`: Applies a wrapped function to a wrapped value.
- Intuition: If you have an `Optional<Function<A, B>>` and an `Optional<A>`, you can use the `Applicative` for `Optional` to get an `Optional<B>`. This is how `Validated` is able to accumulate errors from multiple independent validation steps.
Monad<F>
A Monad is an Applicative that adds the power of sequencing dependent computations. It provides a way to chain operations together, where the result of one operation is fed into the next.
- Key Method: `flatMap(Function<A, Kind<F, B>> f, Kind<F, A> fa)`
- Intuition: `flatMap` is the powerhouse of monadic composition. It takes a value from a context (like an `Optional<A>`), applies a function that returns a new context (`A -> Optional<B>`), and flattens the result into a single context (`Optional<B>`). This is what enables the elegant, chainable workflows you see in the examples.
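The JDK's `Optional.flatMap` is the direct-style counterpart of this operation, and shows why sequencing is inherently dependent: the second step cannot run until the first has produced a value:

```java
import java.util.Optional;

public class FlatMapDemo {
  // First step: parse a String to an Integer, which may fail.
  public static Optional<Integer> parse(String s) {
    try {
      return Optional.of(Integer.parseInt(s));
    } catch (NumberFormatException e) {
      return Optional.empty();
    }
  }

  // Second step depends on the first: it needs the parsed number.
  public static Optional<Double> reciprocal(int n) {
    return n == 0 ? Optional.empty() : Optional.of(1.0 / n);
  }

  public static void main(String[] args) {
    // flatMap chains the dependent steps; any empty result short-circuits the rest.
    System.out.println(parse("4").flatMap(FlatMapDemo::reciprocal));   // Optional[0.25]
    System.out.println(parse("abc").flatMap(FlatMapDemo::reciprocal)); // Optional.empty
    System.out.println(parse("0").flatMap(FlatMapDemo::reciprocal));   // Optional.empty
  }
}
```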
MonadError<F, E>
A MonadError is a specialised Monad that has a defined error type E. It provides explicit methods for raising and handling errors within a monadic workflow.
- Key Methods:
  - `raiseError(E error)`: Lifts an error `E` into the monadic context `F<A>`.
  - `handleErrorWith(Kind<F, A> fa, Function<E, Kind<F, A>> f)`: Provides a way to recover from a failed computation.
Alternative<F>
An Alternative is an Applicative that adds the concept of choice and failure. It provides operations for combining alternatives and representing empty/failed computations. Alternative sits at the same level as Applicative in the type class hierarchy.
- Key Methods:
  - `empty()`: Returns the empty/failure element for the applicative.
  - `orElse(Kind<F, A> fa, Supplier<Kind<F, A>> fb)`: Combines two alternatives, preferring the first if it succeeds, otherwise evaluating and returning the second.
  - `guard(boolean condition)`: Returns success (`of(Unit.INSTANCE)`) if true, otherwise empty.
- Use Case: Essential for parser combinators, fallback chains, non-deterministic computation, and trying multiple alternatives with lazy evaluation.
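The JDK's `Optional.or` (Java 9+) has exactly the `orElse` shape described above: the second alternative is a `Supplier`, so it is evaluated only if the first is empty. The `config` helper below is hypothetical, used here to illustrate a fallback chain:

```java
import java.util.Optional;

public class OrElseDemo {
  // Optional.or mirrors Alternative's orElse: the fallback is a Supplier,
  // so later alternatives are only evaluated when earlier ones are empty.
  public static Optional<String> config(Optional<String> cli, Optional<String> env) {
    return cli
        .or(() -> env)                       // try the environment next
        .or(() -> Optional.of("default"));   // final hard-coded fallback
  }

  public static void main(String[] args) {
    System.out.println(config(Optional.of("from-cli"), Optional.of("from-env"))); // Optional[from-cli]
    System.out.println(config(Optional.empty(), Optional.of("from-env")));        // Optional[from-env]
    System.out.println(config(Optional.empty(), Optional.empty()));               // Optional[default]
  }
}
```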
Selective<F>
A Selective functor sits between Applicative and Monad in terms of power. It extends Applicative with the ability to conditionally apply effects based on the result of a previous computation, whilst maintaining a static structure where all possible branches are visible upfront.
- Key Methods:
  - `select(Kind<F, Choice<A, B>> fab, Kind<F, Function<A, B>> ff)`: Core operation that conditionally applies a function based on a `Choice`.
  - `whenS(Kind<F, Boolean> fcond, Kind<F, Unit> fa)`: Conditionally executes an effect based on a boolean condition.
  - `ifS(Kind<F, Boolean> fcond, Kind<F, A> fthen, Kind<F, A> felse)`: Provides if-then-else semantics with both branches visible upfront.
- Use Case: Perfect for feature flags, conditional logging, configuration-based behaviour, and any scenario where you need conditional effects with static analysis capabilities.
MonadZero<F>
A MonadZero is a Monad that also extends Alternative, combining monadic bind with choice operations. It adds the concept of a "zero" or "empty" element, allowing it to represent failure or absence.
- Key Methods:
  - `zero()`: Returns the zero/empty element for the monad (implements `empty()` from Alternative).
  - Inherits `orElse()` and `guard()` from `Alternative`.
- Use Case: Primarily enables filtering in for-comprehensions via the `when()` clause. Also provides all Alternative operations for monadic contexts. Implemented by `List`, `Maybe`, `Optional`, and `Stream`.
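The JDK's `Optional.filter` and `Stream.filter` show the same idea in direct style: a failed predicate collapses the computation to the zero element:

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Stream;

public class ZeroFilterDemo {
  public static void main(String[] args) {
    // Optional.filter returns the zero element (empty) when the predicate
    // fails -- the role zero()/guard() play behind a when() clause.
    System.out.println(Optional.of(10).filter(n -> n > 5)); // Optional[10]
    System.out.println(Optional.of(3).filter(n -> n > 5));  // Optional.empty

    // For Stream (and List), the zero element is the empty collection:
    // failing elements simply vanish from the result.
    List<Integer> kept = Stream.of(1, 6, 2, 9).filter(n -> n > 5).toList();
    System.out.println(kept); // [6, 9]
  }
}
```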
Data Aggregation Type Classes
These type classes define how data can be combined and reduced.
Semigroup<A>
A Semigroup is a simple type class for any type A that has an associative combine operation. It's the foundation for any kind of data aggregation.
- Key Method: `combine(A a1, A a2)`
- Use Case: Its primary use in this library is to tell a `Validated` `Applicative` how to accumulate errors.
Monoid<A>
A Monoid is a Semigroup that also has an "empty" or "identity" element. This is a value that, when combined with any other value, does nothing.
- Key Methods:
  - `combine(A a1, A a2)` (from `Semigroup`)
  - `empty()`
- Use Case: Essential for folding data structures, where `empty()` provides the starting value for the reduction.
Structure-Iterating Type Classes
These type classes define how to iterate over and manipulate the contents of a data structure in a generic way.
Foldable<F>
A Foldable is a type class for any data structure F that can be reduced to a single summary value. It uses a Monoid to combine the elements.
- Key Method: `foldMap(Monoid<M> monoid, Function<A, M> f, Kind<F, A> fa)`
- Intuition: It abstracts the process of iterating over a collection and aggregating the results.
Traverse<F>
A Traverse is a powerful type class that extends both Functor and Foldable. It allows you to iterate over a data structure F<A> and apply an effectful function A -> G<B> at each step, collecting the results into a single effect G<F<B>>.
- Key Method: `traverse(Applicative<G> applicative, Function<A, Kind<G, B>> f, Kind<F, A> fa)`
- Use Case: This is incredibly useful for tasks like validating every item in a `List`, where the validation returns a `Validated`. The result is a single `Validated` containing either a `List` of all successful results or an accumulation of all errors.
Dual-Parameter Type Classes
These type classes work with types that take two type parameters, such as functions, profunctors, and bifunctors.
Profunctor<P>
A Profunctor is a type class for any type constructor P<A, B> that is contravariant in its first parameter and covariant in its second. This is the abstraction behind functions and many data transformation patterns.
New to variance terminology? See the Glossary for detailed explanations of covariant, contravariant, and invariant with Java-focused examples.
- Key Methods:
  - `lmap(Function<C, A> f, Kind2<P, A, B> pab)`: Pre-process the input (contravariant mapping)
  - `rmap(Function<B, C> g, Kind2<P, A, B> pab)`: Post-process the output (covariant mapping)
  - `dimap(Function<C, A> f, Function<B, D> g, Kind2<P, A, B> pab)`: Transform both input and output simultaneously
- Use Case: Essential for building flexible data transformation pipelines, API adapters, and validation frameworks that can adapt to different input and output formats without changing core business logic.
Profunctors in Optics
Importantly, every optic in higher-kinded-j is fundamentally a profunctor. This means that Lens, Prism, Iso, and Traversal all support profunctor operations through their contramap, map, and dimap methods. This provides incredible flexibility for adapting optics to work with different data types and structures, making them highly reusable across different contexts and API boundaries.
Bifunctor<F>
A Bifunctor is a type class for any type constructor F<A, B> that is covariant in both its type parameters. Unlike Profunctor, which is contravariant in the first parameter, Bifunctor allows you to map over both sides independently or simultaneously.
New to variance terminology? See the Glossary for detailed explanations of covariant, contravariant, and invariant with Java-focused examples.
- Key Methods:
  - `bimap(Function<A, C> f, Function<B, D> g, Kind2<F, A, B> fab)`: Transform both type parameters simultaneously
  - `first(Function<A, C> f, Kind2<F, A, B> fab)`: Map over only the first type parameter
  - `second(Function<B, D> g, Kind2<F, A, B> fab)`: Map over only the second type parameter
- Use Case: Essential for transforming both channels of sum types (like `Either<L, R>` or `Validated<E, A>`) or product types (like `Tuple2<A, B>` or `Writer<W, A>`), where both parameters hold data rather than representing input/output relationships. Perfect for API response transformation, validation pipelines, data migration, and error handling scenarios.
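A minimal sketch of `bimap` using the JDK's `Map.Entry` as a stand-in product type; the `bimap` helper here is hypothetical, not the library's API:

```java
import java.util.AbstractMap;
import java.util.Map;
import java.util.function.Function;

public class BimapDemo {
  // bimap over a pair: both sides carry data, and each can be transformed
  // independently in a single pass.
  public static <A, B, C, D> Map.Entry<C, D> bimap(
      Function<A, C> f, Function<B, D> g, Map.Entry<A, B> pair) {
    return new AbstractMap.SimpleEntry<>(f.apply(pair.getKey()), g.apply(pair.getValue()));
  }

  public static void main(String[] args) {
    Map.Entry<String, Integer> raw = new AbstractMap.SimpleEntry<>("count", 41);
    // Transform the key and the value simultaneously
    Map.Entry<String, Integer> out = bimap(String::toUpperCase, n -> n + 1, raw);
    System.out.println(out.getKey() + "=" + out.getValue()); // COUNT=42
  }
}
```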
Functor: The "Mappable" Type Class
- How to transform values inside containers without changing the container structure
- The difference between regular functions and functorial mapping
- Functor laws (identity and composition) and why they matter
- How to use Functor instances with List, Optional, and other containers
- When to choose Functor over direct method calls
At the heart of functional programming is the ability to transform data within a container without having to open it. The Functor type class provides exactly this capability. It's the simplest and most common abstraction for any data structure that can be "mapped over."
If you've ever used Optional.map() or Stream.map(), you've already been using the Functor pattern! higher-kinded-j simply formalises this concept so you can apply it to any data structure.
What is it?
A Functor is a type class for any data structure F that supports a map operation. This operation takes a function from A -> B and applies it to the value(s) inside a container F<A>, producing a new container F<B> of the same shape.
Think of a Functor as a generic "box" that holds a value. The map function lets you transform the contents of the box without ever taking the value out. Whether the box is an Optional that might be empty, a List with many items, or a Try that might hold an error, the mapping logic remains the same.
The interface for Functor in hkj-api is simple and elegant:
public interface Functor<F> {
<A, B> @NonNull Kind<F, B> map(final Function<? super A, ? extends B> f, final Kind<F, A> fa);
}
- `f`: The function to apply to the value inside the `Functor`.
- `fa`: The higher-kinded `Functor` instance (e.g., a `Kind<OptionalKind.Witness, String>`).
The Functor Laws
For a Functor implementation to be lawful, it must obey two simple rules. These ensure that the map operation is predictable and doesn't have unexpected side effects.
1. Identity Law: Mapping with the identity function (`x -> x`) should change nothing.

```java
functor.map(x -> x, fa); // This must be equivalent to fa
```

2. Composition Law: Mapping with two functions composed together is the same as mapping with each function one after the other.

```java
Function<A, B> f = ...;
Function<B, C> g = ...;

// This...
functor.map(g.compose(f), fa);
// ...must be equivalent to this:
functor.map(g, functor.map(f, fa));
```
These laws ensure that map is only about transformation and preserves the structure of the data type.
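Both laws can be spot-checked against the JDK's `Optional.map`, which behaves as a lawful functor in this sense:

```java
import java.util.Optional;
import java.util.function.Function;

public class FunctorLawsDemo {
  public static void main(String[] args) {
    Optional<String> fa = Optional.of("hello");
    Function<String, Integer> f = String::length;
    Function<Integer, Integer> g = n -> n * 2;

    // Identity law: mapping the identity function changes nothing
    boolean identityHolds = fa.map(x -> x).equals(fa);

    // Composition law: map(g . f) == map(f) then map(g)
    boolean compositionHolds = fa.map(g.compose(f)).equals(fa.map(f).map(g));

    System.out.println(identityHolds);    // true
    System.out.println(compositionHolds); // true
  }
}
```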
Why is it useful?
Functor allows you to write generic, reusable code that transforms values inside any "mappable" data structure. This is the first step toward abstracting away the boilerplate of dealing with different container types.
Example: Mapping over an Optional and a List
Let's see how we can use the Functor instances for Optional and List to apply the same logic to different data structures.
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.list.ListFunctor;
import org.higherkindedj.hkt.list.ListKind;
import org.higherkindedj.hkt.optional.OptionalFunctor;
import org.higherkindedj.hkt.optional.OptionalKind;
import org.higherkindedj.hkt.Functor;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import static org.higherkindedj.hkt.list.ListKindHelper.LIST;
import static org.higherkindedj.hkt.optional.OptionalKindHelper.OPTIONAL;
// Our function that we want to apply
Function<String, Integer> stringLength = String::length;
// --- Scenario 1: Mapping over an Optional ---
Functor<OptionalKind.Witness> optionalFunctor = OptionalFunctor.INSTANCE;
// The data
Kind<OptionalKind.Witness, String> optionalWithValue = OPTIONAL.widen(Optional.of("Hello"));
Kind<OptionalKind.Witness, String> optionalEmpty = OPTIONAL.widen(Optional.empty());
// Apply the map
Kind<OptionalKind.Witness, Integer> lengthWithValue = optionalFunctor.map(stringLength, optionalWithValue);
Kind<OptionalKind.Witness, Integer> lengthEmpty = optionalFunctor.map(stringLength, optionalEmpty);
// Result: Optional[5]
System.out.println(OPTIONAL.narrow(lengthWithValue));
// Result: Optional.empty
System.out.println(OPTIONAL.narrow(lengthEmpty));
// --- Scenario 2: Mapping over a List ---
Functor<ListKind.Witness> listFunctor = ListFunctor.INSTANCE;
// The data
Kind<ListKind.Witness, String> listOfStrings = LIST.widen(List.of("one", "two", "three"));
// Apply the map
Kind<ListKind.Witness, Integer> listOfLengths = listFunctor.map(stringLength, listOfStrings);
// Result: [3, 3, 5]
System.out.println(LIST.narrow(listOfLengths));
As you can see, the Functor provides a consistent API for transformation, regardless of the underlying data structure. This is the first and most essential step on the path to more powerful abstractions like Applicative and Monad.
Applicative: Applying Wrapped Functions
- How to apply wrapped functions to wrapped values using ap
- The difference between independent computations (Applicative) and dependent ones (Monad)
- How to combine multiple validation results and accumulate all errors
- Using map2, map3 and other convenience methods for combining values
- Real-world validation scenarios with the Validated type
Whilst a Functor excels at applying a pure function to a value inside a context, what happens when the function you want to apply is also wrapped in a context? This is where the Applicative type class comes in. It's the next step up in power from a Functor and allows you to combine multiple computations within a context in a very powerful way.
What is it?
An Applicative (or Applicative Functor) is a Functor that also provides two key operations:
- of (also known as pure): Lifts a regular value into the applicative context. For example, it can take a String and wrap it to become an Optional<String>.
- ap: Takes a function that is wrapped in the context (e.g., an Optional<Function<A, B>>) and applies it to a value that is also in the context (e.g., an Optional<A>).
This ability to apply a wrapped function to a wrapped value is what makes Applicative so powerful. It's the foundation for combining independent computations.
The interface for Applicative in hkj-api extends Functor:
@NullMarked
public interface Applicative<F> extends Functor<F> {
<A> @NonNull Kind<F, A> of(@Nullable A value);
<A, B> @NonNull Kind<F, B> ap(
Kind<F, ? extends Function<A, B>> ff,
Kind<F, A> fa
);
// Default methods for map2, map3, etc. are also provided
default <A, B, C> @NonNull Kind<F, C> map2(
final Kind<F, A> fa,
final Kind<F, B> fb,
final BiFunction<? super A, ? super B, ? extends C> f) {
return ap(map(a -> b -> f.apply(a, b), fa), fb);
}
}
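To see how the default map2 derivation works, here is the same pattern written against plain java.util.Optional: ap is implemented directly, and map2 is derived from it exactly as in the interface above (currying the function, mapping it over the first value, then applying it to the second). This is an illustrative sketch, not the library's own code:

```java
import java.util.Optional;
import java.util.function.BiFunction;
import java.util.function.Function;

class ApplicativeOptionalDemo {
    // ap for Optional: apply a wrapped function to a wrapped value
    static <A, B> Optional<B> ap(Optional<Function<A, B>> ff, Optional<A> fa) {
        return ff.flatMap(fa::map);
    }

    // map2 derived the same way as the interface's default method:
    // curry f, map it over fa, then ap the wrapped function to fb
    static <A, B, C> Optional<C> map2(Optional<A> fa, Optional<B> fb, BiFunction<A, B, C> f) {
        return ap(fa.map(a -> b -> f.apply(a, b)), fb);
    }

    public static void main(String[] args) {
        System.out.println(map2(Optional.of(2), Optional.of(3), Integer::sum));
        System.out.println(map2(Optional.<Integer>empty(), Optional.of(3), Integer::sum));
    }
}
```

If either input is empty, the result is empty; otherwise the two values are combined with the pure function.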
Why is it useful?
The primary use case for Applicative is to combine the results of several independent computations that are all inside the same context. The classic example is data validation, where you want to validate multiple fields and accumulate all the errors.
Whilst a Monad (using flatMap) can also combine computations, it cannot accumulate errors in the same way. When a monadic chain fails, it short-circuits, giving you only the first error. An Applicative, on the other hand, can process all computations independently and combine the results.
Example: Validating a User Registration Form
Imagine you have a registration form and you need to validate both the username and the password. Each validation can either succeed or return a list of error messages. We can use the Applicative for Validated to run both validations and get all the errors back at once.
import org.higherkindedj.hkt.Applicative;
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.validated.Validated;
import org.higherkindedj.hkt.validated.ValidatedMonad;
import org.higherkindedj.hkt.Semigroups;
import java.util.List;
import static org.higherkindedj.hkt.validated.ValidatedKindHelper.VALIDATED;
// A simple User class
record User(String username, String password) {}
// Validation functions
public Validated<List<String>, String> validateUsername(String username) {
if (username.length() < 3) {
return Validated.invalid(List.of("Username must be at least 3 characters"));
}
return Validated.valid(username);
}
public Validated<List<String>, String> validatePassword(String password) {
if (!password.matches(".*\\d.*")) {
return Validated.invalid(List.of("Password must contain a number"));
}
return Validated.valid(password);
}
// --- Get the Applicative instance for Validated ---
// We need a Semigroup to tell the Applicative how to combine errors (in this case, by concatenating lists)
Applicative<Validated.Witness<List<String>>> applicative =
ValidatedMonad.instance(Semigroups.list());
// --- Scenario 1: All validations pass ---
Validated<List<String>, String> validUsername = validateUsername("test_user");
Validated<List<String>, String> validPassword = validatePassword("password123");
Kind<Validated.Witness<List<String>>, User> validResult =
applicative.map2(
VALIDATED.widen(validUsername),
VALIDATED.widen(validPassword),
User::new // If both are valid, create a new User
);
// Result: Valid(User[username=test_user, password=password123])
System.out.println(VALIDATED.narrow(validResult));
// --- Scenario 2: Both validations fail ---
Validated<List<String>, String> invalidUsername = validateUsername("no");
Validated<List<String>, String> invalidPassword = validatePassword("bad");
Kind<Validated.Witness<List<String>>, User> invalidResult =
applicative.map2(
VALIDATED.widen(invalidUsername),
VALIDATED.widen(invalidPassword),
User::new
);
// The errors from both validations are accumulated!
// Result: Invalid([Username must be at least 3 characters, Password must contain a number])
System.out.println(VALIDATED.narrow(invalidResult));
This error accumulation is impossible with Functor and is one of the key features that makes Applicative so indispensable for real-world functional programming.
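The accumulation mechanism itself is small enough to sketch without the library. The following self-contained example uses a hypothetical Validation record (not the library's Validated) whose map2 runs both sides and merges their error lists instead of stopping at the first failure:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

class ValidationDemo {
    // Minimal accumulating validation type (hypothetical, for illustration only)
    record Validation<T>(List<String> errors, T value) {
        static <T> Validation<T> valid(T v) { return new Validation<>(List.of(), v); }
        static <T> Validation<T> invalid(String e) { return new Validation<>(List.of(e), null); }
        boolean isValid() { return errors.isEmpty(); }
    }

    // map2: run BOTH validations and merge their error lists
    static <A, B, C> Validation<C> map2(
            Validation<A> va, Validation<B> vb, BiFunction<A, B, C> f) {
        if (va.isValid() && vb.isValid()) {
            return Validation.valid(f.apply(va.value(), vb.value()));
        }
        List<String> all = new ArrayList<>(va.errors());
        all.addAll(vb.errors()); // accumulate errors from both sides
        return new Validation<>(List.copyOf(all), null);
    }

    public static void main(String[] args) {
        Validation<String> badUser = Validation.invalid("Username too short");
        Validation<String> badPass = Validation.invalid("Password needs a number");
        Validation<String> combined = map2(badUser, badPass, (u, p) -> u + "/" + p);
        System.out.println(combined.errors()); // both error messages are present
        if (combined.errors().size() != 2) throw new AssertionError();
    }
}
```

A flatMap-based chain over the same inputs would have reported only the first error; the applicative map2 reports both.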
Alternative
The Alternative type class represents applicative functors that support choice and failure. It extends the Applicative interface with operations for combining alternatives and representing empty/failed computations. Alternative sits at the same level as Applicative in the type class hierarchy, providing a more general abstraction than MonadZero.
The interface for Alternative in hkj-api extends Applicative:
public interface Alternative<F> extends Applicative<F> {
<A> Kind<F, A> empty();
<A> Kind<F, A> orElse(Kind<F, A> fa, Supplier<Kind<F, A>> fb);
// guard(condition) yields of(Unit) when the condition holds, otherwise empty()
default Kind<F, Unit> guard(boolean condition) {
return condition ? of(Unit.INSTANCE) : empty();
}
}
Why is it useful?
An Applicative provides a way to apply functions within a context and combine multiple values. An Alternative adds two critical operations to this structure:
- empty(): Returns the "empty" or "failure" element for the applicative functor.
- orElse(fa, fb): Combines two alternatives, preferring the first if it succeeds, otherwise evaluating and returning the second.
These operations enable:
- Choice and fallback mechanisms: Try one computation, if it fails, try another
- Non-deterministic computation: Represent multiple possible results (e.g., List concatenation)
- Parser combinators: Essential for building flexible parsers that try alternatives
- Conditional effects: Using the guard() helper for filtering
Relationship with MonadZero
In higher-kinded-j, MonadZero extends both Monad and Alternative:
public interface MonadZero<F> extends Monad<F>, Alternative<F> {
<A> Kind<F, A> zero();
@Override
default <A> Kind<F, A> empty() {
return zero();
}
}
This means:
- Every MonadZero is also an Alternative
- The zero() method provides the implementation for empty()
- Types that are MonadZero (List, Maybe, Optional, Stream) automatically get Alternative operations
Key Implementations in this Project
For different types, Alternative has different semantics:
- Maybe: empty() returns Nothing. orElse() returns the first Just, or the second if the first is Nothing.
- Optional: empty() returns Optional.empty(). orElse() returns the first present value, or the second if the first is empty.
- List: empty() returns an empty list []. orElse() concatenates both lists (non-deterministic choice).
- Stream: empty() returns an empty stream. orElse() concatenates both streams lazily.
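The two families of semantics, first-success for Optional-like types and concatenation for List-like types, can be sketched with plain JDK types (helper names here are illustrative, not the library's API):

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;
import java.util.stream.Stream;

class AltSemanticsDemo {
    // orElse for Optional: first present value wins; fallback evaluated only on empty
    static <A> Optional<A> orElseOptional(Optional<A> fa, Supplier<Optional<A>> fb) {
        return fa.isPresent() ? fa : fb.get();
    }

    // orElse for List: concatenation of both alternatives (non-deterministic choice)
    static <A> List<A> orElseList(List<A> fa, Supplier<List<A>> fb) {
        return Stream.concat(fa.stream(), fb.get().stream()).toList();
    }

    public static void main(String[] args) {
        System.out.println(orElseOptional(Optional.empty(), () -> Optional.of("fallback")));
        System.out.println(orElseOptional(Optional.of("first"), () -> Optional.of("second")));
        System.out.println(orElseList(List.of(1, 2), () -> List.of(3, 4))); // [1, 2, 3, 4]
    }
}
```

Both helpers satisfy the same Alternative laws; they just interpret "choice" differently.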
Primary Uses
1. Fallback Chains with Maybe/Optional
Try multiple sources, using the first successful one:
import org.higherkindedj.hkt.Alternative;
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.maybe.MaybeKind;
import org.higherkindedj.hkt.maybe.MaybeMonad;
import org.higherkindedj.hkt.maybe.Maybe;
import static org.higherkindedj.hkt.maybe.MaybeKindHelper.MAYBE;
// Get the Alternative instance for Maybe
final Alternative<MaybeKind.Witness> alt = MaybeMonad.INSTANCE;
// Simulate trying multiple configuration sources
Kind<MaybeKind.Witness, String> fromEnv = MAYBE.nothing(); // Not found
Kind<MaybeKind.Witness, String> fromFile = MAYBE.just("config.txt"); // Found!
Kind<MaybeKind.Witness, String> fromDefault = MAYBE.just("default");
// Try sources in order
Kind<MaybeKind.Witness, String> config = alt.orElse(
fromEnv,
() -> alt.orElse(
fromFile,
() -> fromDefault
)
);
Maybe<String> result = MAYBE.narrow(config);
System.out.println("Config: " + result.get()); // "config.txt"
Using orElseAll() for cleaner syntax:
Kind<MaybeKind.Witness, String> config = alt.orElseAll(
fromEnv,
() -> fromFile,
() -> fromDefault
);
2. Non-Deterministic Computation with List
Represent all possible outcomes:
import org.higherkindedj.hkt.Alternative;
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.list.ListKind;
import org.higherkindedj.hkt.list.ListMonad;
import java.util.Arrays;
import java.util.List;
import static org.higherkindedj.hkt.list.ListKindHelper.LIST;
// Get the Alternative instance for List
final Alternative<ListKind.Witness> alt = ListMonad.INSTANCE;
// Possible actions
Kind<ListKind.Witness, String> actions1 = LIST.widen(Arrays.asList("move_left", "move_right"));
Kind<ListKind.Witness, String> actions2 = LIST.widen(Arrays.asList("jump", "duck"));
// Combine all possibilities
Kind<ListKind.Witness, String> allActions = alt.orElse(actions1, () -> actions2);
List<String> result = LIST.narrow(allActions);
System.out.println("All actions: " + result);
// Output: [move_left, move_right, jump, duck]
3. Conditional Success with guard()
Filter based on conditions:
import org.higherkindedj.hkt.Alternative;
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.Unit;
import org.higherkindedj.hkt.maybe.MaybeKind;
import org.higherkindedj.hkt.maybe.MaybeMonad;
import org.higherkindedj.hkt.maybe.Maybe;
import static org.higherkindedj.hkt.maybe.MaybeKindHelper.MAYBE;
final Alternative<MaybeKind.Witness> alt = MaybeMonad.INSTANCE;
// Check authentication
boolean isAuthenticated = true;
Kind<MaybeKind.Witness, Unit> authCheck = alt.guard(isAuthenticated);
Maybe<Unit> result = MAYBE.narrow(authCheck);
System.out.println("Authenticated: " + result.isJust()); // true
// guard(false) returns empty()
Kind<MaybeKind.Witness, Unit> failedCheck = alt.guard(false);
System.out.println("Failed: " + MAYBE.narrow(failedCheck).isNothing()); // true
4. Lazy Evaluation
The second argument to orElse() is provided via Supplier, enabling lazy evaluation:
final Alternative<MaybeKind.Witness> alt = MaybeMonad.INSTANCE;
Kind<MaybeKind.Witness, String> primary = MAYBE.just("found");
Kind<MaybeKind.Witness, String> result = alt.orElse(
primary,
() -> {
System.out.println("Computing fallback...");
return MAYBE.just("fallback");
}
);
// "Computing fallback..." is never printed because primary succeeded
System.out.println("Result: " + MAYBE.narrow(result).get()); // "found"
For Maybe and Optional, the second alternative is only evaluated if the first is empty.
For List and Stream, both alternatives are always evaluated (to concatenate them), but the Supplier still provides control over when the second collection is created.
Alternative Laws
Alternative instances must satisfy these laws:
- Left Identity: orElse(empty(), () -> fa) ≡ fa (empty is the left identity for orElse)
- Right Identity: orElse(fa, () -> empty()) ≡ fa (empty is the right identity for orElse)
- Associativity: orElse(fa, () -> orElse(fb, () -> fc)) ≡ orElse(orElse(fa, () -> fb), () -> fc) (the grouping of alternatives doesn't matter)
- Left Absorption: ap(empty(), fa) ≡ empty() (applying an empty function gives empty)
- Right Absorption: ap(ff, empty()) ≡ empty() (applying any function to empty gives empty)
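The identity and associativity laws can be verified concretely for an Optional-based orElse. This standalone sketch uses a local helper with the same shape as the type class method:

```java
import java.util.Optional;
import java.util.function.Supplier;

class AlternativeLawsCheck {
    // orElse for Optional: prefer the first if present, otherwise evaluate the second
    static <A> Optional<A> orElse(Optional<A> fa, Supplier<Optional<A>> fb) {
        return fa.isPresent() ? fa : fb.get();
    }

    public static void main(String[] args) {
        Optional<String> fa = Optional.of("a");
        Optional<String> fb = Optional.of("b");
        Optional<String> fc = Optional.of("c");
        Optional<String> empty = Optional.empty();

        // Left identity: orElse(empty, () -> fa) == fa
        if (!orElse(empty, () -> fa).equals(fa)) throw new AssertionError();
        // Right identity: orElse(fa, () -> empty) == fa
        if (!orElse(fa, () -> Optional.<String>empty()).equals(fa)) throw new AssertionError();
        // Associativity: grouping does not change the result
        Optional<String> grouped = orElse(orElse(fa, () -> fb), () -> fc);
        Optional<String> nested = orElse(fa, () -> orElse(fb, () -> fc));
        if (!grouped.equals(nested)) throw new AssertionError();

        System.out.println("Alternative laws hold for Optional");
    }
}
```

Swapping in the List-concatenation orElse would verify the same laws for the non-deterministic interpretation.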
Practical Example: Configuration Loading
Here's a complete example showing how Alternative enables elegant fallback chains:
import org.higherkindedj.hkt.Alternative;
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.maybe.MaybeKind;
import org.higherkindedj.hkt.maybe.MaybeMonad;
import org.higherkindedj.hkt.maybe.Maybe;
import static org.higherkindedj.hkt.maybe.MaybeKindHelper.MAYBE;
public class ConfigLoader {
private final Alternative<MaybeKind.Witness> alt = MaybeMonad.INSTANCE;
public Kind<MaybeKind.Witness, String> loadConfig(String key) {
return alt.orElseAll(
readFromEnvironment(key),
() -> readFromConfigFile(key),
() -> readFromDatabase(key),
() -> getDefaultValue(key)
);
}
private Kind<MaybeKind.Witness, String> readFromEnvironment(String key) {
String value = System.getenv(key);
return value != null ? MAYBE.just(value) : MAYBE.nothing();
}
private Kind<MaybeKind.Witness, String> readFromConfigFile(String key) {
// Simulate file reading
return MAYBE.nothing(); // Not found
}
private Kind<MaybeKind.Witness, String> readFromDatabase(String key) {
// Simulate database query
return MAYBE.just("db-value-" + key);
}
private Kind<MaybeKind.Witness, String> getDefaultValue(String key) {
return MAYBE.just("default-" + key);
}
}
// Usage
ConfigLoader loader = new ConfigLoader();
Kind<MaybeKind.Witness, String> config = loader.loadConfig("APP_NAME");
Maybe<String> result = MAYBE.narrow(config);
System.out.println("Config value: " + result.get()); // "db-value-APP_NAME"
Comparison: Alternative vs MonadZero
| Aspect | Alternative | MonadZero |
|---|---|---|
| Extends | Applicative | Monad (and Alternative) |
| Power Level | Less powerful | More powerful |
| Core Methods | empty(), orElse() | zero(), inherits orElse() |
| Use Case | Choice, fallback, alternatives | Filtering, monadic zero |
| Examples | Parser combinators, fallback chains | For-comprehension filtering |
In practice, since MonadZero extends Alternative in higher-kinded-j, types like List, Maybe, Optional, and Stream have access to both sets of operations.
When to Use Alternative
Use Alternative when you need to:
- Try multiple alternatives with fallback behaviour
- Combine all possibilities (for List/Stream)
- Conditionally proceed based on boolean conditions (guard())
- Build parser combinators or similar choice-based systems
- Work at the Applicative level without requiring full Monad power
Alternative provides a principled, composable way to handle choice and failure in functional programming.
Complete Working Example
For a complete, runnable example demonstrating Alternative with configuration loading, see:
This example demonstrates:
- Basic orElse() fallback patterns
- orElseAll() for multiple fallback sources
- guard() for conditional validation
- Lazy evaluation benefits
- Parser combinator patterns using Alternative
Monad: Sequencing Computations
- How to sequence computations where each step depends on previous results
- The power of flatMap for chaining operations that return wrapped values
- When to use Monad vs Applicative (dependent vs independent computations)
- Essential utility methods: as, peek, flatMapIfOrElse, and flatMapN
- How to combine multiple monadic values with flatMap2, flatMap3, etc.
- How monadic short-circuiting works in practice
You've seen how Functor lets you map over a value in a context and how Applicative lets you combine independent computations within a context. Now, we'll introduce the most powerful of the trio: Monad.
A Monad builds on Applicative by adding one crucial ability: sequencing computations that depend on each other. If the result of the first operation is needed to determine the second operation, you need a Monad.
What is it?
A Monad is an Applicative that provides a new function called flatMap (also known as bind in some languages). This is the powerhouse of monadic composition.
While map takes a simple function A -> B, flatMap takes a function that returns a new value already wrapped in the monadic context, i.e., A -> Kind<F, B>. flatMap then intelligently flattens the nested result Kind<F, Kind<F, B>> into a simple Kind<F, B>.
This flattening behaviour is what enables you to chain operations together in a clean, readable sequence without creating deeply nested structures.
The Monad Interface
The interface for Monad in hkj-api extends Applicative and adds flatMap along with several useful default methods for common patterns.
@NullMarked
public interface Monad<M> extends Applicative<M> {
// Core sequencing method
<A, B> @NonNull Kind<M, B> flatMap(
final Function<? super A, ? extends Kind<M, B>> f, final Kind<M, A> ma);
// Type-safe conditional branching
default <A, B> @NonNull Kind<M, B> flatMapIfOrElse(
final Predicate<? super A> predicate,
final Function<? super A, ? extends Kind<M, B>> ifTrue,
final Function<? super A, ? extends Kind<M, B>> ifFalse,
final Kind<M, A> ma) {
return flatMap(a -> predicate.test(a) ? ifTrue.apply(a) : ifFalse.apply(a), ma);
}
// Replace the value while preserving the effect
default <A, B> @NonNull Kind<M, B> as(final B b, final Kind<M, A> ma) {
return map(_ -> b, ma);
}
// Perform a side-effect without changing the value
default <A> @NonNull Kind<M, A> peek(final Consumer<? super A> action, final Kind<M, A> ma) {
return map(a -> {
action.accept(a);
return a;
}, ma);
}
// Combine multiple monadic values (flatMapN methods)
default <A, B, R> @NonNull Kind<M, R> flatMap2(
Kind<M, A> ma, Kind<M, B> mb,
BiFunction<? super A, ? super B, ? extends Kind<M, R>> f) {
return flatMap(a -> flatMap(b -> f.apply(a, b), mb), ma);
}
default <A, B, C, R> @NonNull Kind<M, R> flatMap3(
Kind<M, A> ma, Kind<M, B> mb, Kind<M, C> mc,
Function3<? super A, ? super B, ? super C, ? extends Kind<M, R>> f) {
return flatMap(a -> flatMap2(mb, mc, (b, c) -> f.apply(a, b, c)), ma);
}
// flatMap4 and flatMap5 build on flatMap3 and flatMap4 respectively...
}
Monad vs. Applicative
The key difference is simple but profound:
- Applicative is for combining independent computations. The shape and structure of all the computations are known upfront. This is why it can accumulate errors from multiple validations: it runs all of them.
- Monad is for sequencing dependent computations. The computation in the second step cannot be known until the first step has completed. This is why it short-circuits on failure: if the first step fails, there is no value to feed into the second step.
Why is it useful?
Monad is essential for building any kind of workflow where steps depend on the result of previous steps, especially when those steps might fail or be asynchronous. It allows you to write what looks like a simple sequence of operations while hiding the complexity of error handling, null checks, or concurrency.
This pattern is the foundation for the for-comprehension builder in higher-kinded-j, which transforms a chain of flatMap calls into clean, imperative-style code.
Core Method: flatMap
This is the primary method for chaining dependent operations.
Example: A Safe Database Workflow
Imagine a workflow where you need to fetch a user, then use their ID to fetch their account, and finally use the account details to get their balance. Any of these steps could fail (e.g., return an empty Optional). With flatMap, the chain becomes clean and safe.
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.Monad;
import org.higherkindedj.hkt.optional.OptionalKind;
import org.higherkindedj.hkt.optional.OptionalMonad;
import java.util.Optional;
import static org.higherkindedj.hkt.optional.OptionalKindHelper.OPTIONAL;
// Mock data records and repository functions...
record User(int id, String name) {}
record Account(int userId, String accountId) {}
public Kind<OptionalKind.Witness, User> findUser(int id) { /* ... */ }
public Kind<OptionalKind.Witness, Account> findAccount(User user) { /* ... */ }
public Kind<OptionalKind.Witness, Double> getBalance(Account account) { /* ... */ }
// --- Get the Monad instance for Optional ---
Monad<OptionalKind.Witness> monad = OptionalMonad.INSTANCE;
// --- Scenario 1: Successful workflow ---
Kind<OptionalKind.Witness, Double> balanceSuccess = monad.flatMap(user ->
monad.flatMap(account ->
getBalance(account),
findAccount(user)),
findUser(1));
// Result: Optional[1000.0]
System.out.println(OPTIONAL.narrow(balanceSuccess));
// --- Scenario 2: Failing workflow (user not found) ---
Kind<OptionalKind.Witness, Double> balanceFailure = monad.flatMap(user ->
/* this part is never executed */
monad.flatMap(account -> getBalance(account), findAccount(user)),
findUser(2)); // This returns Optional.empty()
// The chain short-circuits immediately.
// Result: Optional.empty
System.out.println(OPTIONAL.narrow(balanceFailure));
The flatMap chain elegantly handles the "happy path" while also providing robust, short-circuiting logic for the failure cases, all without a single null check.
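For comparison, the same workflow written against java.util.Optional's built-in flatMap shows exactly the pattern the Monad abstraction generalises. The mock data values here are assumptions for illustration:

```java
import java.util.Optional;

class OptionalChainDemo {
    record User(int id, String name) {}
    record Account(int userId, String accountId) {}

    // Mock repositories: only user 1 exists
    static Optional<User> findUser(int id) {
        return id == 1 ? Optional.of(new User(1, "Alice")) : Optional.empty();
    }
    static Optional<Account> findAccount(User u) {
        return Optional.of(new Account(u.id(), "acc-123"));
    }
    static Optional<Double> getBalance(Account a) {
        return Optional.of(1000.0);
    }

    public static void main(String[] args) {
        // Happy path: every step produces a value
        Optional<Double> ok = findUser(1)
            .flatMap(OptionalChainDemo::findAccount)
            .flatMap(OptionalChainDemo::getBalance);
        System.out.println(ok); // Optional[1000.0]

        // Failure path: findUser(2) is empty, so the later steps never run
        Optional<Double> missing = findUser(2)
            .flatMap(OptionalChainDemo::findAccount)
            .flatMap(OptionalChainDemo::getBalance);
        System.out.println(missing); // Optional.empty
    }
}
```

The library's Monad instance lets you write this identical chain generically, so the same code also works for Either, Try, CompletableFuture, and so on.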
Utility Methods
Monad also provides default methods for common tasks like debugging, conditional logic, and transforming results.
flatMapIfOrElse
This is the type-safe way to perform conditional branching in a monadic chain. It applies one of two functions based on a predicate, ensuring that both paths result in the same final type and avoiding runtime errors.
Let's imagine we only want to fetch accounts for "standard" users (ID < 100).
// --- Get the Monad instance for Optional ---
Monad<OptionalKind.Witness> monad = OptionalMonad.INSTANCE;
// A user who meets the condition
Kind<OptionalKind.Witness, User> standardUser = OPTIONAL.widen(Optional.of(new User(1, "Alice")));
// A user who does not
Kind<OptionalKind.Witness, User> premiumUser = OPTIONAL.widen(Optional.of(new User(101, "Bob")));
// --- Scenario 1: Predicate is true ---
Kind<OptionalKind.Witness, Account> resultSuccess = monad.flatMapIfOrElse(
user -> user.id() < 100, // Predicate: user is standard
user -> findAccount(user), // Action if true: find their account
user -> OPTIONAL.widen(Optional.empty()), // Action if false: return empty
standardUser
);
// Result: Optional[Account[userId=1, accountId=acc-123]]
System.out.println(OPTIONAL.narrow(resultSuccess));
// --- Scenario 2: Predicate is false ---
Kind<OptionalKind.Witness, Account> resultFailure = monad.flatMapIfOrElse(
user -> user.id() < 100,
user -> findAccount(user),
user -> OPTIONAL.widen(Optional.empty()), // This path is taken
premiumUser
);
// Result: Optional.empty
System.out.println(OPTIONAL.narrow(resultFailure));
as
Replaces the value inside a monad while preserving its effect (e.g., success or failure). This is useful when you only care that an operation succeeded, not what its result was.
// After finding a user, we just want a confirmation message.
Kind<OptionalKind.Witness, String> successMessage = monad.as("User found successfully", findUser(1));
// Result: Optional["User found successfully"]
System.out.println(OPTIONAL.narrow(successMessage));
// If the user isn't found, the effect (empty Optional) is preserved.
Kind<OptionalKind.Witness, String> failureMessage = monad.as("User found successfully", findUser(99));
// Result: Optional.empty
System.out.println(OPTIONAL.narrow(failureMessage));
peek
Allows you to perform a side-effect (like logging) on the value inside a monad without altering the flow. The original monadic value is always returned.
// Log the user's name if they are found
Kind<OptionalKind.Witness, User> peekSuccess = monad.peek(
user -> System.out.println("LOG: Found user -> " + user.name()),
findUser(1)
);
// Console output: LOG: Found user -> Alice
// Result: Optional[User[id=1, name=Alice]] (The original value is unchanged)
System.out.println("Return value: " + OPTIONAL.narrow(peekSuccess));
// If the user isn't found, the action is never executed.
Kind<OptionalKind.Witness, User> peekFailure = monad.peek(
user -> System.out.println("LOG: Found user -> " + user.name()),
findUser(99)
);
// Console output: (nothing)
// Result: Optional.empty
System.out.println("Return value: " + OPTIONAL.narrow(peekFailure));
Combining Multiple Monadic Values: flatMapN 🔄
Just as Applicative provides map2, map3, etc. for combining independent computations with a pure function, Monad provides flatMap2, flatMap3, flatMap4, and flatMap5 for combining multiple monadic values where the combining function itself returns a monadic value.
These methods are perfect when you need to:
- Sequence multiple independent computations and then perform a final effectful operation
- Validate multiple pieces of data together with an operation that may fail
- Combine results from multiple sources with additional logic that may produce effects
flatMap2
Combines two monadic values and applies a function that returns a new monadic value.
Example: Validating and Combining Two Database Results
import java.util.Optional;
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.Monad;
import org.higherkindedj.hkt.optional.OptionalKind;
import org.higherkindedj.hkt.optional.OptionalMonad;
import static org.higherkindedj.hkt.optional.OptionalKindHelper.OPTIONAL;
record User(int id, String name) {}
record Order(int userId, String item) {}
record UserOrder(User user, Order order) {}
// Mock repository functions
public Kind<OptionalKind.Witness, User> findUser(int id) { /* ... */ }
public Kind<OptionalKind.Witness, Order> findOrder(int orderId) { /* ... */ }
// Validation function that might fail
public Kind<OptionalKind.Witness, UserOrder> validateAndCombine(User user, Order order) {
if (order.userId() != user.id()) {
return OPTIONAL.widen(Optional.empty()); // Validation failed
}
return OPTIONAL.widen(Optional.of(new UserOrder(user, order)));
}
Monad<OptionalKind.Witness> monad = OptionalMonad.INSTANCE;
// Combine user and order, then validate
Kind<OptionalKind.Witness, UserOrder> result = monad.flatMap2(
findUser(1),
findOrder(100),
(user, order) -> validateAndCombine(user, order)
);
// Result: Optional[UserOrder[...]] if valid, Optional.empty if any step fails
System.out.println(OPTIONAL.narrow(result));
flatMap3 and Higher Arities
For more complex scenarios, you can combine three, four, or five monadic values:
record Product(int id, String name, double price) {}
record Inventory(int productId, int quantity) {}
public Kind<OptionalKind.Witness, Product> findProduct(int id) { /* ... */ }
public Kind<OptionalKind.Witness, Inventory> checkInventory(int productId) { /* ... */ }
// Process an order with user, product, and inventory check
Kind<OptionalKind.Witness, String> orderResult = monad.flatMap3(
findUser(1),
findProduct(100),
checkInventory(100),
(user, product, inventory) -> {
if (inventory.quantity() <= 0) {
return OPTIONAL.widen(Optional.empty()); // Out of stock
}
String confirmation = String.format(
"Order confirmed for %s: %s (qty: %d)",
user.name(), product.name(), inventory.quantity()
);
return OPTIONAL.widen(Optional.of(confirmation));
}
);
flatMapN vs mapN
The key difference between flatMapN and mapN is:
- mapN (from Applicative): The combining function returns a pure value ((A, B) -> C)
- flatMapN (from Monad): The combining function returns a monadic value ((A, B) -> Kind<M, C>)
This makes flatMapN methods ideal when the combination of values needs to perform additional effects, such as:
- Additional validation that might fail
- Database lookups based on combined criteria
- Computations that may produce side effects
- Operations that need to maintain monadic context
// mapN: Pure combination
Kind<OptionalKind.Witness, String> mapResult = monad.map2(
findUser(1),
findOrder(100),
(user, order) -> user.name() + " ordered " + order.item() // Pure function
);
// flatMapN: Effectful combination
Kind<OptionalKind.Witness, String> flatMapResult = monad.flatMap2(
findUser(1),
findOrder(100),
(user, order) -> validateAndProcess(user, order) // Returns Optional
);
This pattern is especially powerful when combined with error-handling monads like Either or Try, where the combining function can itself fail with a meaningful error.
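The shape of flatMap2, combine two contexts with a function that may itself fail, is easy to see in a standalone sketch over plain Optional (the even-sum check is an arbitrary stand-in for an effectful combiner):

```java
import java.util.Optional;
import java.util.function.BiFunction;

class FlatMap2Demo {
    // flatMap2 for Optional: combine two values with a function that may itself fail
    static <A, B, R> Optional<R> flatMap2(
            Optional<A> ma, Optional<B> mb, BiFunction<A, B, Optional<R>> f) {
        return ma.flatMap(a -> mb.flatMap(b -> f.apply(a, b)));
    }

    public static void main(String[] args) {
        // Combiner succeeds only when the sum is even (an arbitrary effectful check)
        BiFunction<Integer, Integer, Optional<Integer>> evenSum =
            (a, b) -> (a + b) % 2 == 0 ? Optional.of(a + b) : Optional.empty();

        System.out.println(flatMap2(Optional.of(2), Optional.of(4), evenSum)); // Optional[6]
        System.out.println(flatMap2(Optional.of(2), Optional.of(3), evenSum)); // Optional.empty
    }
}
```

Note that the pipeline can fail in two distinct ways: either input being empty, or the combiner itself returning empty.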
MonadError: Handling Errors Gracefully
While a Monad is excellent for sequencing operations that might fail (like with Optional or Either), it doesn't provide a standardised way to inspect or recover from those failures. The MonadError type class fills this gap.
It's a specialised Monad that has a defined error type E, giving you a powerful and abstract API for raising and handling errors within any monadic workflow.
What is it?
A MonadError is a Monad that provides two additional, fundamental operations for working with failures:
- raiseError(E error): This allows you to construct a failed computation by lifting an error value E directly into the monadic context.
- handleErrorWith(Kind<F, A> fa, ...): This is the recovery mechanism. It allows you to inspect a potential failure and provide a fallback computation to rescue the workflow.
By abstracting over a specific error type E, MonadError allows you to write generic, resilient code that can work with any data structure capable of representing failure, such as Either<E, A>, Try<A> (where E is Throwable), or even custom error-handling monads.
The interface for MonadError in hkj-api extends Monad:
@NullMarked
public interface MonadError<F, E> extends Monad<F> {
<A> @NonNull Kind<F, A> raiseError(@Nullable final E error);
<A> @NonNull Kind<F, A> handleErrorWith(
final Kind<F, A> ma,
final Function<? super E, ? extends Kind<F, A>> handler);
// Default recovery methods like handleError, recover, etc. are also provided
default <A> @NonNull Kind<F, A> handleError(
final Kind<F, A> ma,
final Function<? super E, ? extends A> handler) {
return handleErrorWith(ma, error -> of(handler.apply(error)));
}
}
Why is it useful?
MonadError formalises the pattern of "try-catch" in a purely functional way. It lets you build complex workflows that need to handle specific types of errors without coupling your logic to a concrete implementation like Either or Try. You can write a function once, and it will work seamlessly with any data type that has a MonadError instance.
This is incredibly useful for building robust applications, separating business logic from error-handling logic, and providing sensible fallbacks when operations fail.
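The raise-and-recover pattern can be sketched without the library using a minimal sealed Either (hypothetical; the library ships its own richer Either). Here handleErrorWith inspects a Left and substitutes a recovery computation, just as the type class method does:

```java
import java.util.function.Function;

class EitherDemo {
    // Minimal Either (hypothetical, for illustration only)
    sealed interface Either<L, R> {}
    record Left<L, R>(L error) implements Either<L, R> {}
    record Right<L, R>(R value) implements Either<L, R> {}

    // raiseError corresponds to constructing a Left directly
    static Either<String, Integer> safeDivide(int a, int b) {
        return b == 0 ? new Left<>("Cannot divide by zero!") : new Right<>(a / b);
    }

    // handleErrorWith: inspect a failure and substitute a recovery computation
    static <L, R> Either<L, R> handleErrorWith(
            Either<L, R> e, Function<L, Either<L, R>> handler) {
        return e instanceof Left<L, R> l ? handler.apply(l.error()) : e;
    }

    public static void main(String[] args) {
        System.out.println(safeDivide(10, 2));  // a Right holding 5

        // Recover from the failure with a default value of 0
        Either<String, Integer> recovered =
            handleErrorWith(safeDivide(10, 0), msg -> new Right<>(0));
        System.out.println(recovered);          // a Right holding 0
    }
}
```

The value of MonadError is that this recovery logic is written once against the abstraction, so it works for Either, Try, or any other failure-capable context.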
Example: A Resilient Division Workflow
Let's model a division operation that can fail with a specific error message. We'll use Either<String, A> as our data type, which is a perfect fit for MonadError.
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.either.Either;
import org.higherkindedj.hkt.either.EitherMonad;
import static org.higherkindedj.hkt.either.EitherKindHelper.EITHER;
// --- Get the MonadError instance for Either<String, ?> ---
MonadError<Either.Witness<String>, String> monadError = EitherMonad.instance();
// A function that performs division, raising a specific error on failure
public Kind<Either.Witness<String>, Integer> safeDivide(int a, int b) {
if (b == 0) {
return monadError.raiseError("Cannot divide by zero!");
}
return monadError.of(a / b);
}
// --- Scenario 1: A successful division ---
Kind<Either.Witness<String>, Integer> success = safeDivide(10, 2);
// Result: Right(5)
System.out.println(EITHER.narrow(success));
// --- Scenario 2: A failed division ---
Kind<Either.Witness<String>, Integer> failure = safeDivide(10, 0);
// Result: Left(Cannot divide by zero!)
System.out.println(EITHER.narrow(failure));
// --- Scenario 3: Recovering from the failure ---
// We can use handleErrorWith to catch the error and return a fallback value.
Kind<Either.Witness<String>, Integer> recovered = monadError.handleErrorWith(
failure,
errorMessage -> {
System.out.println("Caught an error: " + errorMessage);
return monadError.of(0); // Recover with a default value of 0
}
);
// Result: Right(0)
System.out.println(EITHER.narrow(recovered));
In this example, raiseError allows us to create the failure case in a clean, declarative way, while handleErrorWith provides a powerful mechanism for recovery, making our code more resilient and predictable.
Semigroup and Monoid: Foundational Type Classes
- The fundamental building blocks for combining data: Semigroup and Monoid
- How associative operations enable parallel and sequential data processing
- Using Monoids for error accumulation in validation scenarios
- Practical applications with String concatenation, integer addition, and boolean operations
- Advanced Monoid operations: combining collections, repeated application, and identity testing
- Working with numeric types: Long and Double monoid instances
- Optional-based monoids for data aggregation: first, last, maximum, and minimum
- How these abstractions power Foldable operations and validation workflows
In functional programming, we often use type classes to define common behaviours that can be applied to a wide range of data types. These act as interfaces that allow us to write more abstract and reusable code. In higher-kinded-j, we provide a number of these type classes to enable powerful functional patterns.
Here we will cover two foundational type classes: Semigroup and Monoid. Understanding these will give you a solid foundation for many of the more advanced concepts in the library.
Semigroup<A>
A Semigroup is one of the simplest and most fundamental type classes. It provides a blueprint for types that have a single, associative way of being combined.
What is it?
A Semigroup is a type class for any data type that has a combine operation. This operation takes two values of the same type and merges them into a single value of that type. The only rule is that this operation must be associative.
This means that for any values a, b, and c:
(a.combine(b)).combine(c) must be equal to a.combine(b.combine(c))
The interface for Semigroup in hkj-api is as follows:
public interface Semigroup<A> {
A combine(A a1, A a2);
}
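To make the associativity law concrete, here is a minimal, self-contained check. The Semigroup interface is redeclared locally so the snippet compiles on its own, and the lambda instance is illustrative rather than the library's Semigroups.string():

```java
// A local redeclaration of the interface shown above, so this snippet is self-contained.
interface Semigroup<A> { A combine(A a1, A a2); }

public class SemigroupLawDemo {
    public static void main(String[] args) {
        // Illustrative string-concatenation semigroup (a stand-in for Semigroups.string())
        Semigroup<String> concat = (a1, a2) -> a1 + a2;
        String a = "foo", b = "bar", c = "baz";
        String left = concat.combine(concat.combine(a, b), c);  // (a combine b) combine c
        String right = concat.combine(a, concat.combine(b, c)); // a combine (b combine c)
        System.out.println(left.equals(right)); // prints: true
    }
}
```

Grouping either way yields "foobarbaz", which is exactly what the associativity law guarantees.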
Common Instances: The Semigroups Utility
To make working with Semigroup easier, higher-kinded-j provides a Semigroups utility interface with static factory methods for common instances.
// Get a Semigroup for concatenating Strings
Semigroup<String> stringConcat = Semigroups.string();
// Get a Semigroup for concatenating Strings with a delimiter
Semigroup<String> stringConcatDelimited = Semigroups.string(", ");
// Get a Semigroup for concatenating Lists
Semigroup<List<Integer>> listConcat = Semigroups.list();
Where is it used in higher-kinded-j?
The primary and most powerful use case for Semigroup in this library is to enable error accumulation with the Validated data type.
When you use the Applicative instance for Validated, you must provide a Semigroup for the error type. This tells the applicative how to combine errors when multiple invalid computations occur.
Example: Accumulating Validation Errors
// Create an applicative for Validated that accumulates String errors by joining them.
Applicative<Validated.Witness<String>> applicative =
ValidatedMonad.instance(Semigroups.string("; "));
// Two invalid results
Validated<String, Integer> invalid1 = Validated.invalid("Field A is empty");
Validated<String, Integer> invalid2 = Validated.invalid("Field B is not a number");
// Combine them using the applicative's map2 method
Kind<Validated.Witness<String>, Integer> result =
applicative.map2(
VALIDATED.widen(invalid1),
VALIDATED.widen(invalid2),
(val1, val2) -> val1 + val2
);
// The errors are combined using our Semigroup
// Result: Invalid("Field A is empty; Field B is not a number")
System.out.println(VALIDATED.narrow(result));
Monoid<A>
A Monoid is a Semigroup with a special "identity" or "empty" element. This makes it even more powerful, as it provides a way to have a "starting" or "default" value.
What is it?
A Monoid is a type class for any data type that has an associative combine operation (from Semigroup) and an empty value. This empty value is a special element that, when combined with any other value, returns that other value.
This is known as the identity law. For any value a:
a.combine(empty()) must be equal to a, and empty().combine(a) must be equal to a
The interface for Monoid in hkj-api extends Semigroup:
public interface Monoid<A> extends Semigroup<A> {
A empty();
}
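The identity law is easy to verify directly. This self-contained sketch redeclares the two interfaces locally and uses an illustrative integer-addition instance standing in for Monoids.integerAddition():

```java
// Local redeclarations of the interfaces shown above, so this snippet is self-contained.
interface Semigroup<A> { A combine(A a1, A a2); }
interface Monoid<A> extends Semigroup<A> { A empty(); }

public class MonoidLawDemo {
    public static void main(String[] args) {
        // Illustrative integer-addition monoid (a stand-in for Monoids.integerAddition())
        Monoid<Integer> intAdd = new Monoid<>() {
            public Integer combine(Integer a1, Integer a2) { return a1 + a2; }
            public Integer empty() { return 0; }
        };
        int a = 42;
        System.out.println(intAdd.combine(a, intAdd.empty()) == a); // right identity: true
        System.out.println(intAdd.combine(intAdd.empty(), a) == a); // left identity: true
    }
}
```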
Common Instances: The Monoids Utility
Similar to Semigroups, the library provides a Monoids utility interface for creating common instances.
// Get a Monoid for integer addition (empty = 0)
Monoid<Integer> intAddition = Monoids.integerAddition();
// Get a Monoid for String concatenation (empty = "")
Monoid<String> stringMonoid = Monoids.string();
// Get a Monoid for boolean AND (empty = true)
Monoid<Boolean> booleanAnd = Monoids.booleanAnd();
Where it is used in higher-kinded-j
A Monoid is essential for folding (or reducing) a data structure. The empty element provides a safe starting value, which means you can correctly fold a collection that might be empty.
This is formalised in the Foldable typeclass, which has a foldMap method. This method maps every element in a structure to a monoidal type and then combines all the results.
Example: Using foldMap with different Monoids
List<Integer> numbers = List.of(1, 2, 3, 4, 5);
Kind<ListKind.Witness, Integer> numbersKind = LIST.widen(numbers);
// 1. Sum the list using the integer addition monoid
Integer sum = ListTraverse.INSTANCE.foldMap(
Monoids.integerAddition(),
Function.identity(),
numbersKind
); // Result: 15
// 2. Concatenate the numbers as strings
String concatenated = ListTraverse.INSTANCE.foldMap(
Monoids.string(),
String::valueOf,
numbersKind
); // Result: "12345"
Advanced Monoid Operations
The Monoid interface provides several powerful default methods that build upon the basic combine and empty operations. These methods handle common aggregation patterns and make working with collections much more convenient.
combineAll: Aggregating Collections
The combineAll method takes an iterable collection and combines all its elements using the monoid's operation. If the collection is empty, it returns the identity element.
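Conceptually, combineAll is just a fold that starts from the identity element. The sketch below is a plausible default implementation for intuition only, with the interfaces redeclared locally; it is not the library's actual source:

```java
import java.util.List;

interface Semigroup<A> { A combine(A a1, A a2); }
interface Monoid<A> extends Semigroup<A> {
    A empty();
    // A plausible sketch of combineAll: fold every element onto the identity.
    // An empty collection never enters the loop, so the identity is returned.
    default A combineAll(Iterable<A> values) {
        A acc = empty();
        for (A v : values) acc = combine(acc, v);
        return acc;
    }
}

public class CombineAllDemo {
    public static void main(String[] args) {
        Monoid<Integer> sum = new Monoid<>() {
            public Integer combine(Integer a, Integer b) { return a + b; }
            public Integer empty() { return 0; }
        };
        System.out.println(sum.combineAll(List.of(120, 450, 380, 290))); // 1240
        System.out.println(sum.combineAll(List.<Integer>of()));          // 0
    }
}
```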
Monoid<Integer> sum = Monoids.integerAddition();
List<Integer> salesData = List.of(120, 450, 380, 290);
Integer totalSales = sum.combineAll(salesData);
// Result: 1240
// Works safely with empty collections
Integer emptyTotal = sum.combineAll(Collections.emptyList());
// Result: 0 (the empty value)
This is particularly useful for batch processing scenarios where you need to aggregate data from multiple sources:
// Combining log messages
Monoid<String> logMonoid = Monoids.string();
List<String> logMessages = loadLogMessages();
String combinedLog = logMonoid.combineAll(logMessages);
// Merging configuration sets
Monoid<Set<String>> configMonoid = Monoids.set();
List<Set<String>> featureFlags = List.of(
Set.of("feature-a", "feature-b"),
Set.of("feature-b", "feature-c"),
Set.of("feature-d")
);
Set<String> allFlags = configMonoid.combineAll(featureFlags);
// Result: ["feature-a", "feature-b", "feature-c", "feature-d"]
combineN: Repeated Application
The combineN method combines a value with itself n times. This is useful for scenarios where you need to apply the same value repeatedly:
Monoid<Integer> product = Monoids.integerMultiplication();
// Calculate 2^5 using multiplication monoid
Integer result = product.combineN(2, 5);
// Result: 32 (2 * 2 * 2 * 2 * 2)
// Repeat a string pattern
Monoid<String> stringMonoid = Monoids.string();
String border = stringMonoid.combineN("=", 50);
// Result: "=================================================="
// Build a list with repeated elements
Monoid<List<String>> listMonoid = Monoids.list();
List<String> repeated = listMonoid.combineN(List.of("item"), 3);
// Result: ["item", "item", "item"]
Special cases:
- When n = 0, returns the empty value
- When n = 1, returns the value unchanged
- When n < 0, throws IllegalArgumentException
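Those special cases fall out of a straightforward fold from the identity element. The following is a hedged, self-contained sketch of how combineN could be implemented; it is not the library's actual code:

```java
interface Semigroup<A> { A combine(A a1, A a2); }
interface Monoid<A> extends Semigroup<A> {
    A empty();
    // Plausible sketch of combineN: start from the identity and fold in n copies.
    default A combineN(A value, int n) {
        if (n < 0) throw new IllegalArgumentException("n must be non-negative");
        A acc = empty();                                   // n = 0 -> the empty value
        for (int i = 0; i < n; i++) acc = combine(acc, value);
        return acc;                                        // n = 1 -> value, by the identity law
    }
}

public class CombineNDemo {
    public static void main(String[] args) {
        Monoid<Integer> product = new Monoid<>() {
            public Integer combine(Integer a, Integer b) { return a * b; }
            public Integer empty() { return 1; }
        };
        System.out.println(product.combineN(2, 5)); // 32
        System.out.println(product.combineN(2, 0)); // 1 (the identity)
    }
}
```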
isEmpty: Identity Testing
The isEmpty method tests whether a given value equals the identity element of the monoid:
Monoid<Integer> sum = Monoids.integerAddition();
Monoid<Integer> product = Monoids.integerMultiplication();
sum.isEmpty(0); // true (0 is the identity for addition)
sum.isEmpty(5); // false
product.isEmpty(1); // true (1 is the identity for multiplication)
product.isEmpty(0); // false
Monoid<String> stringMonoid = Monoids.string();
stringMonoid.isEmpty(""); // true
stringMonoid.isEmpty("text"); // false
This is particularly useful for optimisation and conditional logic:
public void processIfNotEmpty(Monoid<String> monoid, String value) {
if (!monoid.isEmpty(value)) {
// Only process non-empty values
performExpensiveOperation(value);
}
}
Working with Numeric Types
The Monoids utility provides comprehensive support for numeric operations beyond just Integer. This is particularly valuable for financial calculations, statistical operations, and scientific computing.
Long Monoids
For working with large numeric values or high-precision calculations:
// Long addition for counting large quantities
Monoid<Long> longSum = Monoids.longAddition();
List<Long> userCounts = List.of(1_500_000L, 2_300_000L, 890_000L);
Long totalUsers = longSum.combineAll(userCounts);
// Result: 4,690,000
// Long multiplication for compound calculations
Monoid<Long> longProduct = Monoids.longMultiplication();
Long compound = longProduct.combineN(2L, 20);
// Result: 1,048,576 (2^20)
Double Monoids
For floating-point arithmetic and statistical computations:
// Double addition for financial calculations
Monoid<Double> dollarSum = Monoids.doubleAddition();
List<Double> expenses = List.of(49.99, 129.50, 89.99);
Double totalExpenses = dollarSum.combineAll(expenses);
// Result: 269.48
// Double multiplication for compound interest
Monoid<Double> growth = Monoids.doubleMultiplication();
Double interestRate = 1.05; // 5% per year
Double compoundGrowth = growth.combineN(interestRate, 10);
// Result: ≈1.629 (after 10 years)
Practical Example: Statistical Calculations
public class Statistics {
public static double calculateMean(List<Double> values) {
if (values.isEmpty()) {
throw new IllegalArgumentException("Cannot calculate mean of an empty list.");
}
Monoid<Double> sum = Monoids.doubleAddition();
Double total = sum.combineAll(values);
return total / values.size();
}
public static double calculateProduct(List<Double> factors) {
Monoid<Double> product = Monoids.doubleMultiplication();
return product.combineAll(factors);
}
}
// Usage
List<Double> measurements = List.of(23.5, 24.1, 23.8, 24.3);
double average = Statistics.calculateMean(measurements);
// Result: 23.925
Optional Monoids for Data Aggregation
One of the most powerful features of the Monoids utility is its support for Optional-based aggregation. These monoids elegantly handle the common pattern of finding the "best" value from a collection of optional results.
firstOptional and lastOptional
These monoids select the first or last non-empty optional value, making them perfect for fallback chains and priority-based selection:
Monoid<Optional<String>> first = Monoids.firstOptional();
Monoid<Optional<String>> last = Monoids.lastOptional();
List<Optional<String>> configs = List.of(
Optional.empty(), // Missing config
Optional.of("default.conf"), // Found!
Optional.of("user.conf") // Also found
);
// Get first available configuration
Optional<String> primaryConfig = first.combineAll(configs);
// Result: Optional["default.conf"]
// Get last available configuration
Optional<String> latestConfig = last.combineAll(configs);
// Result: Optional["user.conf"]
Practical Example: Configuration Fallback Chain
public class ConfigLoader {
public Optional<Config> loadConfig() {
Monoid<Optional<Config>> firstAvailable = Monoids.firstOptional();
return firstAvailable.combineAll(List.of(
loadFromEnvironment(), // Try environment variables first
loadFromUserHome(), // Then user's home directory
loadFromWorkingDir(), // Then current directory
loadDefaultConfig() // Finally, use defaults
));
}
private Optional<Config> loadFromEnvironment() {
return Optional.ofNullable(System.getenv("APP_CONFIG"))
.map(this::parseConfig);
}
private Optional<Config> loadFromUserHome() {
Path userConfig = Paths.get(System.getProperty("user.home"), ".apprc");
return Files.exists(userConfig)
? Optional.of(parseConfigFile(userConfig))
: Optional.empty();
}
// ... other loaders
}
maximum and minimum
These monoids find the maximum or minimum value from a collection of optional values. They work with any Comparable type or accept a custom Comparator:
Monoid<Optional<Integer>> max = Monoids.maximum();
Monoid<Optional<Integer>> min = Monoids.minimum();
List<Optional<Integer>> scores = List.of(
Optional.of(85),
Optional.empty(), // Missing data
Optional.of(92),
Optional.of(78),
Optional.empty()
);
Optional<Integer> highestScore = max.combineAll(scores);
// Result: Optional[92]
Optional<Integer> lowestScore = min.combineAll(scores);
// Result: Optional[78]
Using Custom Comparators
For more complex types, you can provide a custom comparator:
public record Product(String name, double price) {}
// Find most expensive product
Monoid<Optional<Product>> mostExpensive =
Monoids.maximum(Comparator.comparing(Product::price));
List<Optional<Product>> products = List.of(
Optional.of(new Product("Widget", 29.99)),
Optional.empty(),
Optional.of(new Product("Gadget", 49.99)),
Optional.of(new Product("Gizmo", 19.99))
);
Optional<Product> priciest = mostExpensive.combineAll(products);
// Result: Optional[Product("Gadget", 49.99)]
// Find product with shortest name
Monoid<Optional<Product>> shortestName =
Monoids.minimum(Comparator.comparing(p -> p.name().length()));
Optional<Product> shortest = shortestName.combineAll(products);
// Result: Optional[Product("Gizmo", 19.99)]
Practical Example: Finding Best Offers
public class PriceComparison {
public record Offer(String vendor, BigDecimal price, boolean inStock)
implements Comparable<Offer> {
@Override
public int compareTo(Offer other) {
return this.price.compareTo(other.price);
}
}
public Optional<Offer> findBestOffer(List<String> vendors, String productId) {
Monoid<Optional<Offer>> cheapest = Monoids.minimum();
List<Optional<Offer>> offers = vendors.stream()
.map(vendor -> fetchOffer(vendor, productId))
.filter(opt -> opt.map(Offer::inStock).orElse(false)) // Only in-stock items
.collect(Collectors.toList());
return cheapest.combineAll(offers);
}
private Optional<Offer> fetchOffer(String vendor, String productId) {
// API call to get the vendor's offer; returns Optional.empty() if unavailable.
return Optional.empty(); // stub for illustration
}
}
When Both Optionals are Empty
It's worth noting that these monoids handle empty collections gracefully:
Monoid<Optional<Integer>> max = Monoids.maximum();
List<Optional<Integer>> allEmpty = List.of(
Optional.empty(),
Optional.empty()
);
Optional<Integer> result = max.combineAll(allEmpty);
// Result: Optional.empty()
// Also works with empty list
Optional<Integer> emptyResult = max.combineAll(Collections.emptyList());
// Result: Optional.empty()
This makes them perfect for aggregation pipelines where you're not certain data will be present, but you want to find the best available value if any exists.
Conclusion
Semigroups and Monoids are deceptively simple abstractions that unlock powerful patterns for data combination and aggregation. By understanding these type classes, you gain:
- Composability: Build complex aggregations from simple, reusable pieces
- Type Safety: Let the compiler ensure your combinations are valid
- Flexibility: Swap monoids to get different behaviours from the same code
- Elegance: Express data aggregation intent clearly and concisely
The new utility methods (combineAll, combineN, isEmpty) and expanded instance library (numeric types, Optional-based aggregations) make these abstractions even more practical for everyday Java development.
Further Reading:
- Foldable and Traverse - See how Monoids power folding operations
- Applicative - Learn how Semigroups enable error accumulation with Validated
- Java Optional Documentation
Foldable & Traverse: Reducing a Structure to a Summary
- How to reduce any data structure to a summary value using foldMap
- The power of swapping Monoids to get different aggregations from the same data
- Turning effects "inside-out" with traverse operations
- Validating entire collections and collecting all errors at once
- The relationship between sequence and traverse for effectful operations
The Foldable typeclass represents one of the most common and powerful patterns in functional programming: reducing a data structure to a single summary value. If you've ever calculated the sum of a list of numbers or concatenated a list of strings, you've performed a fold.
Foldable abstracts this pattern, allowing you to write generic code that can aggregate any data structure that knows how to be folded.
What is it?
A data structure is Foldable if it can be "folded up" from left to right into a summary value. The key to this process is the Monoid, which provides two essential things:
- An empty value to start with (e.g., 0 for addition).
- A combine operation to accumulate the results (e.g., +).
The core method of the Foldable typeclass is foldMap.
The foldMap Method
foldMap is a powerful operation that does two things in one step:
- It maps each element in the data structure to a value of a monoidal type.
- It folds (combines) all of those monoidal values into a final result.
The interface for Foldable in hkj-api is as follows:
public interface Foldable<F> {
<A, M> M foldMap(
Monoid<M> monoid,
Function<? super A, ? extends M> f,
Kind<F, A> fa
);
}
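To build intuition for what foldMap does, here is the same operation specialised to a plain java.util.List, with a locally declared Monoid interface as an illustrative stand-in for the library's types:

```java
import java.util.List;
import java.util.function.Function;

public class FoldMapDemo {
    // Illustrative local stand-in for the library's Monoid type class.
    interface Monoid<M> { M empty(); M combine(M a, M b); }

    // foldMap specialised to List: map each element into the monoid,
    // then fold the mapped values together, starting from empty.
    static <A, M> M foldMap(Monoid<M> monoid, Function<? super A, ? extends M> f, List<A> fa) {
        M acc = monoid.empty();
        for (A a : fa) acc = monoid.combine(acc, f.apply(a));
        return acc;
    }

    public static void main(String[] args) {
        Monoid<Integer> sum = new Monoid<>() {
            public Integer empty() { return 0; }
            public Integer combine(Integer a, Integer b) { return a + b; }
        };
        System.out.println(foldMap(sum, Function.<Integer>identity(), List.of(1, 2, 3, 4, 5))); // 15
    }
}
```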
Why is it useful?
Foldable allows you to perform powerful aggregations on any data structure without needing to know its internal representation. By simply swapping out the Monoid, you can get completely different summaries from the same data.
Let's see this in action with List, which has a Foldable instance provided by ListTraverse.
Example: Aggregating a List with Different Monoids
// Our data
List<Integer> numbers = List.of(1, 2, 3, 4, 5);
Kind<ListKind.Witness, Integer> numbersKind = LIST.widen(numbers);
// Our Foldable instance for List
Foldable<ListKind.Witness> listFoldable = ListTraverse.INSTANCE;
// --- Scenario 1: Sum the numbers ---
// We use the integer addition monoid (empty = 0, combine = +)
Integer sum = listFoldable.foldMap(
Monoids.integerAddition(),
Function.identity(), // Map each number to itself
numbersKind
);
// Result: 15
// --- Scenario 2: Check if all numbers are positive ---
// We map each number to a boolean and use the "AND" monoid (empty = true, combine = &&)
Boolean allPositive = listFoldable.foldMap(
Monoids.booleanAnd(),
num -> num > 0,
numbersKind
);
// Result: true
// --- Scenario 3: Convert to strings and concatenate ---
// We map each number to a string and use the string monoid (empty = "", combine = +)
String asString = listFoldable.foldMap(
Monoids.string(),
String::valueOf,
numbersKind
);
// Result: "12345"
As you can see, foldMap provides a single, abstract way to perform a wide variety of aggregations, making your code more declarative and reusable.
Traverse: Effectful Folding
The Traverse typeclass is a powerful extension of Foldable and Functor. It allows you to iterate over a data structure, but with a twist: at each step, you can perform an effectful action and then collect all the results back into a single effect.
This is one of the most useful typeclasses for real-world applications, as it elegantly handles scenarios like validating all items in a list, fetching data for each ID in a collection, and much more.
What is it?
A data structure has a Traverse instance if it can be "traversed" from left to right. The key to this process is the Applicative, which defines how to sequence the effects at each step.
The core method of the Traverse typeclass is traverse.
The traverse Method
The traverse method takes a data structure and a function that maps each element to an effectful computation (wrapped in an Applicative like Validated, Optional, or Either). It then runs these effects in sequence and collects the results.
The true power of traverse is that it can turn a structure of effects "inside-out". For example, it can transform a List<Validated<E, A>> into a single Validated<E, List<A>>.
The interface for Traverse in hkj-api extends Functor and Foldable:
public interface Traverse<T> extends Functor<T>, Foldable<T> {
<F, A, B> Kind<F, Kind<T, B>> traverse(
Applicative<F> applicative,
Function<A, Kind<F, B>> f,
Kind<T, A> ta
);
//... sequenceA method also available
}
Why is it useful?
Traverse abstracts away the boilerplate of iterating over a collection, performing a failable action on each element, and then correctly aggregating the results.
Example: Validating a List of Promo Codes
Imagine you have a list of promo codes, and you want to validate each one. Your validation function returns a Validated<String, PromoCode>. Without traverse, you'd have to write a manual loop, collect all the errors, and handle the logic yourself.
With traverse, this becomes a single, elegant expression.
// Our validation function
public Kind<Validated.Witness<String>, String> validateCode(String code) {
if (code.startsWith("VALID")) {
return VALIDATED.widen(Validated.valid(code));
}
return VALIDATED.widen(Validated.invalid("'" + code + "' is not a valid code"));
}
// Our data
List<String> codes = List.of("VALID-A", "EXPIRED", "VALID-B", "INVALID");
Kind<ListKind.Witness, String> codesKind = LIST.widen(codes);
// The Applicative for Validated, using a Semigroup to join errors
Applicative<Validated.Witness<String>> validatedApplicative =
ValidatedMonad.instance(Semigroups.string("; "));
// --- Traverse the list ---
Kind<Validated.Witness<String>, Kind<ListKind.Witness, String>> result =
ListTraverse.INSTANCE.traverse(
validatedApplicative,
this::validateCode,
codesKind
);
// The result is a single Validated instance with accumulated errors.
// Result: Invalid("'EXPIRED' is not a valid code; 'INVALID' is not a valid code")
System.out.println(VALIDATED.narrow(result));
sequenceA
Traverse also provides sequenceA, which is a specialised version of traverse. It's used when you already have a data structure containing effects (e.g., a List<Optional<A>>) and you want to flip it into a single effect containing the data structure (Optional<List<A>>).
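The "inside-out" flip is easy to see on plain java.util types. This self-contained sketch shows the List<Optional<A>> to Optional<List<A>> case that sequenceA generalises; the sequence helper here is illustrative, not part of the library:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

public class SequenceDemo {
    // The flip sequenceA performs, specialised to Optional:
    // the result is present only if every element is present.
    static <A> Optional<List<A>> sequence(List<Optional<A>> xs) {
        List<A> out = new ArrayList<>();
        for (Optional<A> x : xs) {
            if (x.isEmpty()) return Optional.empty(); // one empty collapses the whole result
            out.add(x.get());
        }
        return Optional.of(out);
    }

    public static void main(String[] args) {
        System.out.println(sequence(List.of(Optional.of(1), Optional.of(2))));   // Optional[[1, 2]]
        System.out.println(sequence(List.of(Optional.of(1), Optional.empty()))); // Optional.empty
    }
}
```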
MonadZero
MonadZero is a more advanced type class that extends both Monad and Alternative, combining the power of monadic bind with choice operations. It includes the concept of a "zero" or "empty" element and is designed for monads that can represent failure, absence, or emptiness, allowing them to be used in filtering operations and alternative chains.
The interface for MonadZero in hkj-api extends Monad and Alternative:
public interface MonadZero<F> extends Monad<F>, Alternative<F> {
<A> Kind<F, A> zero();
@Override
default <A> Kind<F, A> empty() {
return zero();
}
}
Why is it useful?
A Monad provides a way to sequence computations within a context (flatMap, map, of). An Alternative provides choice and failure operations (empty(), orElse()). A MonadZero combines both:
- zero(): Returns the "empty" or "zero" element for the monad (implements empty() from Alternative).
- orElse(): Combines two alternatives (inherited from Alternative).
- guard(): Conditional success helper (inherited from Alternative).
This zero element acts as an absorbing element in a monadic sequence, similar to how multiplying by zero results in zero. If a computation results in a zero, subsequent operations in the chain are typically skipped.
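Plain java.util.Optional already behaves this way, which makes the absorbing-element idea easy to demonstrate without the library:

```java
import java.util.Optional;

public class ZeroAbsorbsDemo {
    public static void main(String[] args) {
        // Optional.empty() plays the role of zero(): once the chain hits it,
        // every later flatMap/map is skipped, much like multiplying by zero.
        Optional<Integer> result = Optional.of(5)
            .flatMap(x -> x % 2 == 0 ? Optional.of(x) : Optional.<Integer>empty()) // 5 is odd -> zero
            .map(x -> x * 10); // never runs once the chain is empty
        System.out.println(result); // Optional.empty
    }
}
```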
MonadZero is particularly useful for making for-comprehensions more powerful. When you are working with a monad that has a MonadZero instance, you can use a when() clause to filter results within the comprehension.
Key Implementations in this Project:
- For List, zero() returns an empty list [].
- For Maybe, zero() returns Nothing.
- For Optional, zero() returns Optional.empty().
Primary Uses
The main purpose of MonadZero is to enable filtering within monadic comprehensions. It allows you to discard results that don't meet a certain criterion.
1. Filtering in For-Comprehensions
As already mentioned, the most powerful application in this codebase is within the For comprehension builder. The builder has two entry points:
- For.from(monad, ...): For any standard Monad.
- For.from(monadZero, ...): An overloaded version specifically for a MonadZero.
Only the version that accepts a MonadZero provides the .when(predicate) filtering step. When the predicate in a .when() clause evaluates to false, the builder internally calls monad.zero() to terminate that specific computational path.
2. Generic Functions
It allows you to write generic functions that can operate over any monad that has a concept of "failure" or "emptiness," such as List, Maybe, or Optional.
Code Example: For Comprehension with ListMonad
The following example demonstrates how MonadZero enables filtering.
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.expression.For;
import org.higherkindedj.hkt.list.ListKind;
import org.higherkindedj.hkt.list.ListMonad;
import java.util.Arrays;
import java.util.List;
import static org.higherkindedj.hkt.list.ListKindHelper.LIST;
// 1. Get the MonadZero instance for List
final ListMonad listMonad = ListMonad.INSTANCE;
// 2. Define the initial data sources
final Kind<ListKind.Witness, Integer> list1 = LIST.widen(Arrays.asList(1, 2, 3));
final Kind<ListKind.Witness, Integer> list2 = LIST.widen(Arrays.asList(10, 20));
// 3. Build the comprehension using the filterable 'For'
final Kind<ListKind.Witness, String> result =
For.from(listMonad, list1) // Start with a MonadZero
.from(a -> list2) // Generator (flatMap)
.when(t -> (t._1() + t._2()) % 2 != 0) // Filter: if the sum is odd
.let(t -> "Sum: " + (t._1() + t._2())) // Binding (map)
.yield((a, b, c) -> a + " + " + b + " = " + c); // Final projection
// 4. Unwrap the result
final List<String> narrow = LIST.narrow(result);
System.out.println("Result of List comprehension: " + narrow);
Explanation:
- The comprehension iterates through all pairs of (a, b) from list1 and list2.
- The .when(...) clause checks if the sum a + b is odd.
- If the sum is even, the monad.zero() method (which returns an empty list) is invoked for that path, effectively discarding it.
- If the sum is odd, the computation continues to the .let() and .yield() steps.
Output:
Result of List comprehension: [1 + 10 = Sum: 11, 1 + 20 = Sum: 21, 3 + 10 = Sum: 13, 3 + 20 = Sum: 23]
Selective: Conditional Effects
- How Selective bridges the gap between Applicative and Monad
- Conditional effect execution without full monadic power
- Using select, whenS, and ifS for static branching
- Building robust workflows with compile-time visible alternatives
- Combining multiple alternatives with orElse
- Real-world examples of conditional effect execution
You've seen how Applicative lets you combine independent computations and how Monad lets you chain dependent computations. The Selective type class sits precisely between them, providing a powerful middle ground: conditional effects with static structure.
What is it?
A Selective functor extends Applicative with the ability to conditionally apply effects based on the result of a previous computation. Unlike Monad, which allows arbitrary dynamic choice of effects, Selective provides a more restricted form of conditional execution where all possible branches must be provided upfront.
This static structure enables:
- Static analysis: All possible execution paths are visible at construction time
- Optimisation: Compilers and runtimes can analyse and potentially parallelise branches
- Conditional effects: Execute operations only when needed, without full monadic power
- Type-safe branching: All branches must produce the same result type
The interface for Selective in hkj-api extends Applicative:
@NullMarked
public interface Selective<F> extends Applicative<F> {
// Core operation
<A, B> Kind<F, B> select(Kind<F, Choice<A, B>> fab, Kind<F, Function<A, B>> ff);
// Derived operations
default <A> Kind<F, Unit> whenS(Kind<F, Boolean> fcond, Kind<F, Unit> fa) { ... }
default <A> Kind<F, A> ifS(Kind<F, Boolean> fcond, Kind<F, A> fthen, Kind<F, A> felse) { ... }
default <A, B, C> Kind<F, C> branch(Kind<F, Choice<A, B>> fab,
Kind<F, Function<A, C>> fl,
Kind<F, Function<B, C>> fr) { ... }
// ... and more
}
The Core Operation: select
The fundamental operation is select, which takes a Choice<A, B> (similar to Either) and a function:
- If the choice is Left(a), the function is applied to a to produce B
- If the choice is Right(b), the function is ignored and b is returned
Example: Conditional Validation
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.Selective;
import org.higherkindedj.hkt.maybe.MaybeSelective;
import org.higherkindedj.hkt.maybe.Maybe;
import static org.higherkindedj.hkt.maybe.MaybeKindHelper.MAYBE;
Selective<MaybeKind.Witness> selective = MaybeSelective.INSTANCE;
// A value that might need validation
Kind<MaybeKind.Witness, Choice<String, Integer>> input =
MAYBE.widen(Maybe.just(Selective.left("42"))); // Left: needs parsing
// Parser function (only applied if Choice is Left)
Kind<MaybeKind.Witness, Function<String, Integer>> parser =
MAYBE.widen(Maybe.<Function<String, Integer>>just(Integer::parseInt));
Kind<MaybeKind.Witness, Integer> result = selective.select(input, parser);
// Result: Just(42)
// If input was already Right(42), parser would not be used
Kind<MaybeKind.Witness, Choice<String, Integer>> alreadyParsed =
MAYBE.widen(Maybe.just(Selective.right(42)));
Kind<MaybeKind.Witness, Integer> result2 = selective.select(alreadyParsed, parser);
// Result: Just(42) - parser was not applied
Conditional Effect Execution: whenS
The whenS operation is the primary way to conditionally execute effects. It takes an effectful boolean condition and an effect that returns Unit, executing the effect only if the condition is true.
Example: Conditional Logging
import org.higherkindedj.hkt.io.IOSelective;
import org.higherkindedj.hkt.io.IO;
import org.higherkindedj.hkt.Unit;
import static org.higherkindedj.hkt.io.IOKindHelper.IO_KIND;
Selective<IOKind.Witness> selective = IOSelective.INSTANCE;
// Check if debug mode is enabled
Kind<IOKind.Witness, Boolean> debugEnabled =
IO_KIND.widen(IO.delay(() -> Config.isDebugMode()));
// The logging effect (only executed if debug is enabled)
Kind<IOKind.Witness, Unit> logEffect =
IO_KIND.widen(IO.fromRunnable(() -> log.debug("Debug information")));
// Conditional execution
Kind<IOKind.Witness, Unit> maybeLog = selective.whenS(debugEnabled, logEffect);
// Run the IO
IO_KIND.narrow(maybeLog).unsafeRunSync();
// Only logs if Config.isDebugMode() returns true
whenS_: Discarding Results
When you have an effect that returns a value but you want to treat it as a Unit-returning operation, use whenS_:
// Database write returns row count, but we don't care about the value
Kind<IOKind.Witness, Integer> writeEffect =
IO_KIND.widen(IO.delay(() -> database.write(data)));
Kind<IOKind.Witness, Boolean> shouldPersist =
IO_KIND.widen(IO.delay(() -> config.shouldPersist()));
// Discard the Integer result, treat as Unit
Kind<IOKind.Witness, Unit> maybeWrite = selective.whenS_(shouldPersist, writeEffect);
Branching: ifS
The ifS operation provides if-then-else semantics for selective functors. Unlike monadic branching, both branches must be provided upfront.
Example: Configuration-Based Behaviour
import org.higherkindedj.hkt.either.EitherSelective;
import org.higherkindedj.hkt.either.Either;
import static org.higherkindedj.hkt.either.EitherKindHelper.EITHER;
Selective<EitherKind.Witness<String>> selective = EitherSelective.instance();
// Check environment
Kind<EitherKind.Witness<String>, Boolean> isProd =
EITHER.widen(Either.right(System.getenv("ENV").equals("production")));
// Production configuration
Kind<EitherKind.Witness<String>, Config> prodConfig =
EITHER.widen(Either.right(new Config("prod", 443, true)));
// Development configuration
Kind<EitherKind.Witness<String>, Config> devConfig =
EITHER.widen(Either.right(new Config("dev", 8080, false)));
// Select configuration based on environment
Kind<EitherKind.Witness<String>, Config> config =
selective.ifS(isProd, prodConfig, devConfig);
// Result: Either.right(Config("prod", 443, true)) if ENV=production
// Either.right(Config("dev", 8080, false)) otherwise
Trying Alternatives: orElse
The orElse operation tries multiple alternatives in sequence, returning the first successful result.
Example: Fallback Configuration Sources
import java.util.List;
Selective<OptionalKind.Witness> selective = OptionalSelective.INSTANCE;
// Try multiple configuration sources
List<Kind<OptionalKind.Witness, Choice<String, Config>>> sources = List.of(
// Try environment variables
OPTIONAL.widen(tryEnvConfig()),
// Try config file
OPTIONAL.widen(tryFileConfig()),
// Try default config
OPTIONAL.widen(Optional.of(Selective.right(defaultConfig())))
);
Kind<OptionalKind.Witness, Choice<String, Config>> result =
selective.orElse(sources);
// Returns the first successful Config, or the last error
Selective vs Applicative vs Monad
Understanding the differences helps you choose the right abstraction:
| Feature | Applicative | Selective | Monad |
|---|---|---|---|
| Combine independent effects | ✅ | ✅ | ✅ |
| Conditional effects | ❌ | ✅ | ✅ |
| Dynamic effect choice | ❌ | ❌ | ✅ |
| Static structure | ✅ | ✅ | ❌ |
| Error accumulation | ✅ (with Validated) | ✅ (with Validated) | ❌ |
| Analyse all branches | ✅ | ✅ | ❌ |
When to use Selective:
- You need conditional effects but want static analysis
- All branches should be known at construction time
- You want optimisation opportunities from visible alternatives
- You need something more powerful than Applicative but less than Monad
Example: Static vs Dynamic Choice
// Selective: Both branches visible at construction
Kind<F, A> selectiveChoice = selective.ifS(
condition,
branchA, // Known upfront
branchB // Known upfront
);
// Monad: Second computation depends on first result (dynamic)
Kind<M, B> monadicChoice = monad.flatMap(a -> {
if (a > 10) return computeX(a); // Not known until 'a' is available
else return computeY(a);
}, source);
Multi-Way Branching: branch
For more complex branching, branch handles both sides of a Choice with different handlers:
Kind<F, Choice<ErrorA, ErrorB>> input = ...; // Could be either error type
Kind<F, Function<ErrorA, String>> handleA =
selective.of(a -> "Error type A: " + a);
Kind<F, Function<ErrorB, String>> handleB =
selective.of(b -> "Error type B: " + b);
Kind<F, String> result = selective.branch(input, handleA, handleB);
// Applies the appropriate handler based on which error type
Chaining Conditional Functions: apS
For chaining multiple conditional functions, apS applies a list of functions sequentially to a value, propagating either the successful result or the first error. It's useful for building a pipeline of validation or transformation steps.
Example: Multi-Step Validation
Kind<F, Choice<Error, Data>> initialData = ...;
List<Kind<F, Function<Data, Choice<Error, Data>>>> validationSteps = List.of(
validateStep1,
validateStep2,
validateStep3
);
// Applies each validation step in order, short-circuiting on the first error.
Kind<F, Choice<Error, Data>> finalResult = selective.apS(initialData, validationSteps);
Real-World Example: Feature Flags
Scenario: Execute analytics tracking only if the feature flag is enabled.
import org.higherkindedj.hkt.io.IOSelective;
import org.higherkindedj.hkt.io.IO;
import static org.higherkindedj.hkt.io.IOKindHelper.IO_KIND;
public class AnalyticsService {
private final Selective<IOKind.Witness> selective = IOSelective.INSTANCE;
public Kind<IOKind.Witness, Unit> trackEvent(String eventName, User user) {
// Check feature flag (effectful)
Kind<IOKind.Witness, Boolean> flagEnabled =
IO_KIND.widen(IO.delay(() -> featureFlags.isEnabled("analytics")));
// The tracking effect (potentially expensive)
Kind<IOKind.Witness, Unit> trackingEffect =
IO_KIND.widen(IO.fromRunnable(() -> {
analytics.track(eventName, user.id(), user.properties());
log.info("Tracked event: " + eventName);
}));
// Only track if flag is enabled
return selective.whenS(flagEnabled, trackingEffect);
}
}
// Usage
AnalyticsService analytics = new AnalyticsService();
Kind<IOKind.Witness, Unit> trackingOperation =
analytics.trackEvent("user_signup", currentUser);
// Execute the IO
IO_KIND.narrow(trackingOperation).unsafeRunSync();
// Only sends analytics if feature flag is enabled
Implementations
Higher-Kinded-J provides Selective instances for:
- Either<E, *> - EitherSelective
- Maybe - MaybeSelective
- Optional - OptionalSelective
- List - ListSelective
- IO - IOSelective
- Reader<R, *> - ReaderSelective
- Id - IdSelective
- Validated<E, *> - ValidatedSelective
Key Takeaways
- Selective sits between Applicative and Monad, providing conditional effects with static structure
- select is the core operation, conditionally applying a function based on a Choice
- whenS enables conditional effect execution, perfect for feature flags and debug logging
- ifS provides if-then-else semantics with both branches visible upfront
- All branches are known at construction time, enabling static analysis and optimisation
- Use Selective when you need conditional effects but want to avoid full monadic power
Profunctor: Building Adaptable Data Pipelines
- How to build adaptable data transformation pipelines
- The dual nature of Profunctors: contravariant inputs and covariant outputs
- Using lmap, rmap, and dimap to adapt functions for different contexts
- Creating flexible API adapters and validation pipelines
- Real-world applications in data format transformation and system integration
So far, we've explored type classes that work with single type parameters—Functor, Applicative, and Monad all operate on types like F<A>. But what about types that take two parameters, like Function<A, B> or Either<L, R>? This is where Profunctors come in.
A Profunctor is a powerful abstraction for working with types that are contravariant in their first type parameter and covariant in their second. Think of it as a generalisation of how functions work: you can pre-process the input (contravariant) and post-process the output (covariant).
New to variance terminology? See the Glossary for detailed explanations of covariant, contravariant, and invariant with Java-focused examples.
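Before looking at the full interface, it helps to see what dimap means when specialised to plain java.util.function.Function: it is simply composition on both sides. The helper below is an illustrative sketch, not part of the library.

```java
import java.util.function.Function;

public class FunctionDimap {
    // dimap for plain functions: pre-process the input with f (contravariant),
    // post-process the output with g (covariant).
    static <A, B, C, D> Function<C, D> dimap(
            Function<C, A> f, Function<B, D> g, Function<A, B> pab) {
        return f.andThen(pab).andThen(g);
    }

    public static void main(String[] args) {
        Function<String, Integer> length = String::length;
        // Adapt length to accept an Integer and produce a formatted String.
        Function<Integer, String> adapted =
                dimap(Object::toString, len -> "len=" + len, length);
        System.out.println(adapted.apply(12345)); // len=5
    }
}
```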
What is a Profunctor?
A Profunctor is a type class for any type constructor P<A, B> that supports three key operations:
- lmap: Map over the first (input) type parameter contravariantly
- rmap: Map over the second (output) type parameter covariantly
- dimap: Map over both parameters simultaneously
The interface for Profunctor in hkj-api works with Kind2<P, A, B>:
@NullMarked
public interface Profunctor<P> {
// Map over the input (contravariant)
default <A, B, C> Kind2<P, C, B> lmap(
Function<? super C, ? extends A> f,
Kind2<P, A, B> pab) {
return dimap(f, Function.identity(), pab);
}
// Map over the output (covariant)
default <A, B, C> Kind2<P, A, C> rmap(
Function<? super B, ? extends C> g,
Kind2<P, A, B> pab) {
return dimap(Function.identity(), g, pab);
}
// Map over both input and output
<A, B, C, D> Kind2<P, C, D> dimap(
Function<? super C, ? extends A> f,
Function<? super B, ? extends D> g,
Kind2<P, A, B> pab);
}
The Canonical Example: Functions
The most intuitive example of a profunctor is the humble Function<A, B>. Functions are naturally:
- Contravariant in their input: If you have a function String -> Integer, you can adapt it to work with any type that can be converted to a String
- Covariant in their output: You can adapt the same function to produce any type that an Integer can be converted to
Let's see this in action with FunctionProfunctor:
import static org.higherkindedj.hkt.func.FunctionKindHelper.FUNCTION;
import org.higherkindedj.hkt.func.FunctionProfunctor;
// Our original function: calculate string length
Function<String, Integer> stringLength = String::length;
Kind2<FunctionKind.Witness, String, Integer> lengthFunction = FUNCTION.widen(stringLength);
FunctionProfunctor profunctor = FunctionProfunctor.INSTANCE;
// LMAP: Adapt the input - now we can use integers!
Kind2<FunctionKind.Witness, Integer, Integer> intToLength =
profunctor.lmap(Object::toString, lengthFunction);
Function<Integer, Integer> intLengthFunc = FUNCTION.getFunction(intToLength);
System.out.println(intLengthFunc.apply(12345)); // Output: 5
// RMAP: Adapt the output - now we get formatted strings!
Kind2<FunctionKind.Witness, String, String> lengthToString =
profunctor.rmap(len -> "Length: " + len, lengthFunction);
Function<String, String> lengthStringFunc = FUNCTION.getFunction(lengthToString);
System.out.println(lengthStringFunc.apply("Hello")); // Output: "Length: 5"
// DIMAP: Adapt both sides simultaneously
Kind2<FunctionKind.Witness, Integer, String> fullTransform =
profunctor.dimap(
Object::toString,        // adapt the input: Integer -> String
len -> "Result: " + len, // adapt the output: Integer -> formatted String
lengthFunction);
Function<Integer, String> fullFunc = FUNCTION.getFunction(fullTransform);
System.out.println(fullFunc.apply(42)); // Output: "Result: 2"
Why Profunctors Matter
Profunctors excel at creating adaptable data transformation pipelines. They're particularly powerful for:
1. API Adapters 🔌
When you need to integrate with external systems that expect different data formats:
// Core business logic: validate a userLogin
Function<User, ValidationResult> validateUser = userLogin -> {
boolean isValid = userLogin.email().contains("@") && !userLogin.name().isEmpty();
return new ValidationResult(isValid, isValid ? "Valid userLogin" : "Invalid userLogin data");
};
// The API expects UserDto input and ApiResponse output
Kind2<FunctionKind.Witness, UserDto, ApiResponse<ValidationResult>> apiValidator =
profunctor.dimap(
// Convert UserDto -> User (contravariant)
dto -> new User(dto.fullName(), dto.emailAddress(),
LocalDate.parse(dto.birthDateString())),
// Convert ValidationResult -> ApiResponse (covariant)
result -> new ApiResponse<>(result, "OK", result.isValid() ? 200 : 400),
FUNCTION.widen(validateUser));
// Now our core logic works seamlessly with the external API format!
Function<UserDto, ApiResponse<ValidationResult>> apiFunc = FUNCTION.getFunction(apiValidator);
2. Validation Pipelines ✅
Build reusable validation logic that adapts to different input and output formats:
// Core validation: check if a number is positive
Function<Double, Boolean> isPositive = x -> x > 0;
// Adapt for string input with detailed error messages
Kind2<FunctionKind.Witness, String, String> stringValidator =
profunctor.dimap(
// Parse string to double
str -> {
try {
return Double.parseDouble(str);
} catch (NumberFormatException e) {
return -1.0; // Invalid marker
}
},
// Convert boolean to message
isValid -> isValid ? "✓ Valid positive number" : "✗ Not a positive number",
FUNCTION.widen(isPositive));
Function<String, String> validator = FUNCTION.getFunction(stringValidator);
System.out.println(validator.apply("42.5")); // "✓ Valid positive number"
System.out.println(validator.apply("-10")); // "✗ Not a positive number"
3. Data Transformation Chains 🔗
Chain multiple adaptations to build complex data processing pipelines:
// Core transformation: User -> UserDto
Function<User, UserDto> userToDto = userLogin ->
new UserDto(userLogin.name(), userLogin.email(),
userLogin.birthDate().format(DateTimeFormatter.ISO_LOCAL_DATE));
// Build a CSV-to-JSON pipeline
Kind2<FunctionKind.Witness, String, String> csvToJsonTransform =
profunctor.dimap(
csvParser, // String -> User (parse CSV)
dtoToJson, // UserDto -> String (serialise to JSON)
FUNCTION.widen(userToDto));
// Add error handling with another rmap
Kind2<FunctionKind.Witness, String, ApiResponse<String>> safeTransform =
profunctor.rmap(
jsonString -> {
if (jsonString.contains("INVALID")) {
return new ApiResponse<>("", "ERROR: Invalid input data", 400);
}
return new ApiResponse<>(jsonString, "SUCCESS", 200);
},
csvToJsonTransform);
Profunctor Laws
For a Profunctor to be lawful, it must satisfy two key properties:
- Identity: dimap(identity, identity, p) == p
- Composition: dimap(f1 ∘ f2, g1 ∘ g2, p) == dimap(f2, g1, dimap(f1, g2, p))
These laws ensure that profunctor operations are predictable and composable—you can build complex transformations by combining simpler ones without unexpected behaviour.
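The composition law can be checked directly for the function profunctor. The sketch below defines a plain-Function dimap inline (an illustrative helper, not the library API) and shows both sides of the law agreeing on a sample input.

```java
import java.util.function.Function;

public class ProfunctorLawCheck {
    // dimap for plain functions (illustrative helper, not the library API).
    static <A, B, C, D> Function<C, D> dimap(
            Function<C, A> f, Function<B, D> g, Function<A, B> p) {
        return f.andThen(p).andThen(g);
    }

    public static void main(String[] args) {
        Function<String, Integer> p = String::length;     // the base arrow
        Function<Integer, String> f1 = Object::toString;  // inner input adapter
        Function<Integer, Integer> f2 = x -> x + 1;       // outer input adapter
        Function<Integer, String> g1 = n -> "r" + n;      // outer output adapter
        Function<Integer, Integer> g2 = n -> n * 2;       // inner output adapter

        // Left side of the law: adapt once, with composed functions.
        Function<Integer, String> lhs = dimap(f1.compose(f2), g1.compose(g2), p);
        // Right side: adapt twice, one layer at a time.
        Function<Integer, String> rhs = dimap(f2, g1, dimap(f1, g2, p));

        System.out.println(lhs.apply(99)); // r6
        System.out.println(rhs.apply(99)); // r6 -- the two sides agree
    }
}
```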
When to Use Profunctors
Profunctors are ideal when you need to:
- Adapt existing functions to work with different input/output types
- Build flexible APIs that can handle multiple data formats
- Create reusable transformation pipelines that can be configured for different use cases
- Integrate with external systems without changing your core business logic
- Handle both sides of a computation (input preprocessing and output postprocessing)
The next time you find yourself writing similar functions that differ only in their input parsing or output formatting, consider whether a profunctor could help you write the logic once and adapt it as needed!
Bifunctor: Mapping Over Both Sides
- How to transform types with two covariant parameters independently or simultaneously
- The difference between sum types (Either, Validated) and product types (Tuple2, Writer)
- Using bimap, first, and second operations effectively
- Transforming both error and success channels in validation scenarios
- Real-world applications in API design, data migration, and error handling
Whilst Functor lets us map over types with a single parameter like F<A>, many useful types have two parameters. Either<L, R>, Tuple2<A, B>, Validated<E, A>, and Writer<W, A> all carry two distinct types. The Bifunctor type class provides a uniform interface for transforming both parameters.
Unlike Profunctor, which is contravariant in the first parameter and covariant in the second (representing input/output relationships), Bifunctor is covariant in both parameters. This makes it perfect for types where both sides hold data that can be independently transformed.
New to variance terminology? See the Glossary for detailed explanations of covariant, contravariant, and invariant with Java-focused examples.
What is a Bifunctor?
A Bifunctor is a type class for any type constructor F<A, B> that supports mapping over both its type parameters. It provides three core operations:
- bimap: Transform both type parameters simultaneously
- first: Transform only the first type parameter
- second: Transform only the second type parameter
The interface for Bifunctor in hkj-api works with Kind2<F, A, B>:
@NullMarked
public interface Bifunctor<F> {
// Transform only the first parameter
default <A, B, C> Kind2<F, C, B> first(
Function<? super A, ? extends C> f,
Kind2<F, A, B> fab) {
return bimap(f, Function.identity(), fab);
}
// Transform only the second parameter
default <A, B, D> Kind2<F, A, D> second(
Function<? super B, ? extends D> g,
Kind2<F, A, B> fab) {
return bimap(Function.identity(), g, fab);
}
// Transform both parameters simultaneously
<A, B, C, D> Kind2<F, C, D> bimap(
Function<? super A, ? extends C> f,
Function<? super B, ? extends D> g,
Kind2<F, A, B> fab);
}
Sum Types vs Product Types
Understanding the distinction between sum types and product types is crucial to using bifunctors effectively.
Sum Types (Exclusive OR) 🔀
A sum type represents a choice between alternatives—you have either one value or another, but never both. In type theory, if type A has n possible values and type B has m possible values, then Either<A, B> has n + m possible values (hence "sum").
Examples in higher-kinded-j:
- Either<L, R>: Holds either a Left value (conventionally an error) or a Right value (conventionally a success)
- Validated<E, A>: Holds either an Invalid error or a Valid result
When you use bimap on a sum type, only one of the two functions will actually execute, depending on which variant is present.
Product Types (Both AND) 🔗
A product type contains multiple values simultaneously—you have both the first value and the second value. In type theory, if type A has n possible values and type B has m possible values, then Tuple2<A, B> has n × m possible values (hence "product").
Examples in higher-kinded-j:
- Tuple2<A, B>: Holds both a first value and a second value
- Writer<W, A>: Holds both a log/output value and a computation result
When you use bimap on a product type, both functions execute because both values are always present.
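The difference is easy to see with two tiny stand-in types (illustrative sketches, not the library's Tuple2 or Either; record patterns in switch require Java 21): bimap on a product runs both functions, while on a sum it runs exactly one.

```java
import java.util.function.Function;

public class SumVsProduct {
    // Product type: both values are present, so bimap applies both functions.
    record Pair<A, B>(A first, B second) {
        <C, D> Pair<C, D> bimap(Function<A, C> f, Function<B, D> g) {
            return new Pair<>(f.apply(first), g.apply(second));
        }
    }

    // Sum type: only one side is present, so bimap applies exactly one function.
    sealed interface Either<L, R> {}
    record Left<L, R>(L value) implements Either<L, R> {}
    record Right<L, R>(R value) implements Either<L, R> {}

    static <L, R, L2, R2> Either<L2, R2> bimap(
            Function<L, L2> f, Function<R, R2> g, Either<L, R> e) {
        return switch (e) {
            case Left<L, R>(L l) -> new Left<>(f.apply(l));
            case Right<L, R>(R r) -> new Right<>(g.apply(r));
        };
    }

    public static void main(String[] args) {
        // Product: both functions run (length and increment).
        System.out.println(new Pair<>("Alice", 30).bimap(String::length, n -> n + 1));
        // Sum: only the Right-side function runs.
        System.out.println(bimap(String::length, (Integer n) -> n + 1,
                new Right<String, Integer>(30)));
    }
}
```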
The Bifunctor Laws
For a Bifunctor to be lawful, it must satisfy two fundamental properties:
- Identity Law: Mapping with identity functions changes nothing
  bifunctor.bimap(x -> x, y -> y, fab); // Must be equivalent to fab
- Composition Law: Mapping with composed functions is equivalent to mapping in sequence
  Function<A, B> f1 = ...;
  Function<B, C> f2 = ...;
  Function<D, E> g1 = ...;
  Function<E, F> g2 = ...;
  // These must be equivalent:
  bifunctor.bimap(f2.compose(f1), g2.compose(g1), fab);
  bifunctor.bimap(f2, g2, bifunctor.bimap(f1, g1, fab));
These laws ensure that bifunctor operations are predictable, composable, and preserve the structure of your data.
Why is it useful?
Bifunctors provide a uniform interface for transforming dual-parameter types, which arise frequently in functional programming. Rather than learning different APIs for transforming Either, Tuple2, Validated, and Writer, you use the same operations everywhere.
Key Use Cases
- Error Handling: Transform both error and success channels simultaneously
- API Design: Normalise internal representations to external formats
- Data Migration: Convert both fields of legacy data structures
- Validation: Format both error messages and valid results
- Logging: Transform both the log output and the computation result
Example 1: Either – A Sum Type
Either<L, R> is the quintessential sum type. It holds either a Left (conventionally an error) or a Right (conventionally a success).
import static org.higherkindedj.hkt.either.EitherKindHelper.EITHER;
import org.higherkindedj.hkt.Bifunctor;
import org.higherkindedj.hkt.either.Either;
import org.higherkindedj.hkt.either.EitherBifunctor;
import org.higherkindedj.hkt.Kind2;
Bifunctor<EitherKind2.Witness> bifunctor = EitherBifunctor.INSTANCE;
// Success case: transform the Right channel
Either<String, Integer> success = Either.right(42);
Kind2<EitherKind2.Witness, String, String> formatted =
bifunctor.second(
n -> "Success: " + n,
EITHER.widen2(success));
System.out.println(EITHER.narrow2(formatted));
// Output: Right(Success: 42)
// Error case: transform the Left channel
Either<String, Integer> error = Either.left("FILE_NOT_FOUND");
Kind2<EitherKind2.Witness, String, Integer> enhanced =
bifunctor.first(
err -> "Error Code: " + err,
EITHER.widen2(error));
System.out.println(EITHER.narrow2(enhanced));
// Output: Left(Error Code: FILE_NOT_FOUND)
// Transform both channels with bimap
Either<String, Integer> either = Either.right(100);
Kind2<EitherKind2.Witness, Integer, String> both =
bifunctor.bimap(
String::length, // Left: string -> int (not executed here)
n -> "Value: " + n, // Right: int -> string (executed)
EITHER.widen2(either));
System.out.println(EITHER.narrow2(both));
// Output: Right(Value: 100)
Note: With Either, only one function in bimap executes because Either is a sum type—you have either Left or Right, never both.
Example 2: Tuple2 – A Product Type
Tuple2<A, B> is a product type that holds both a first value and a second value simultaneously.
import static org.higherkindedj.hkt.tuple.Tuple2KindHelper.TUPLE2;
import org.higherkindedj.hkt.Bifunctor;
import org.higherkindedj.hkt.tuple.Tuple2;
import org.higherkindedj.hkt.tuple.Tuple2Bifunctor;
Bifunctor<Tuple2Kind2.Witness> bifunctor = Tuple2Bifunctor.INSTANCE;
// A tuple representing (name, age)
Tuple2<String, Integer> person = new Tuple2<>("Alice", 30);
// Transform only the first element
Kind2<Tuple2Kind2.Witness, Integer, Integer> nameLength =
bifunctor.first(String::length, TUPLE2.widen2(person));
System.out.println(TUPLE2.narrow2(nameLength));
// Output: Tuple2(5, 30)
// Transform only the second element
Kind2<Tuple2Kind2.Witness, String, String> ageFormatted =
bifunctor.second(age -> age + " years", TUPLE2.widen2(person));
System.out.println(TUPLE2.narrow2(ageFormatted));
// Output: Tuple2(Alice, 30 years)
// Transform both simultaneously with bimap
Kind2<Tuple2Kind2.Witness, String, String> formatted =
bifunctor.bimap(
name -> "Name: " + name, // First: executed
age -> "Age: " + age, // Second: executed
TUPLE2.widen2(person));
System.out.println(TUPLE2.narrow2(formatted));
// Output: Tuple2(Name: Alice, Age: 30)
Note: With Tuple2, both functions in bimap execute because Tuple2 is a product type—both values are always present.
Example 3: Validated – Error Accumulation
Validated<E, A> is a sum type designed for validation scenarios where you need to accumulate errors.
import static org.higherkindedj.hkt.validated.ValidatedKindHelper.VALIDATED;
import org.higherkindedj.hkt.Bifunctor;
import org.higherkindedj.hkt.validated.Validated;
import org.higherkindedj.hkt.validated.ValidatedBifunctor;
import java.util.List;
Bifunctor<ValidatedKind2.Witness> bifunctor = ValidatedBifunctor.INSTANCE;
// Valid case
Validated<List<String>, Integer> valid = Validated.valid(100);
Kind2<ValidatedKind2.Witness, List<String>, String> transformedValid =
bifunctor.second(n -> "Score: " + n, VALIDATED.widen2(valid));
System.out.println(VALIDATED.narrow2(transformedValid));
// Output: Valid(Score: 100)
// Invalid case with multiple errors
Validated<List<String>, Integer> invalid =
Validated.invalid(List.of("TOO_SMALL", "OUT_OF_RANGE"));
// Transform errors to be more user-friendly
Kind2<ValidatedKind2.Witness, String, Integer> userFriendly =
bifunctor.first(
errors -> "Validation failed: " + String.join(", ", errors),
VALIDATED.widen2(invalid));
System.out.println(VALIDATED.narrow2(userFriendly));
// Output: Invalid(Validation failed: TOO_SMALL, OUT_OF_RANGE)
Example 4: Writer – Logging with Computation
Writer<W, A> is a product type that holds both a log value and a computation result.
import static org.higherkindedj.hkt.writer.WriterKindHelper.WRITER;
import org.higherkindedj.hkt.Bifunctor;
import org.higherkindedj.hkt.writer.Writer;
import org.higherkindedj.hkt.writer.WriterBifunctor;
Bifunctor<WriterKind2.Witness> bifunctor = WriterBifunctor.INSTANCE;
// A Writer with a log and a result
Writer<String, Integer> computation = new Writer<>("Calculated: ", 42);
// Transform the log channel
Kind2<WriterKind2.Witness, String, Integer> uppercaseLog =
bifunctor.first(String::toUpperCase, WRITER.widen2(computation));
System.out.println(WRITER.narrow2(uppercaseLog));
// Output: Writer(CALCULATED: , 42)
// Transform both log and result
Kind2<WriterKind2.Witness, List<String>, String> structured =
bifunctor.bimap(
log -> List.of("[LOG]", log), // Wrap log in structured format
value -> "Result: " + value, // Format the result
WRITER.widen2(computation));
System.out.println(WRITER.narrow2(structured));
// Output: Writer([[LOG], Calculated: ], Result: 42)
Example 5: Const – A Phantom Type Bifunctor
Const<C, A> is a unique bifunctor where the second type parameter is phantom (not stored at runtime), making it perfect for fold operations, getters in lens libraries, and data extraction patterns.
import static org.higherkindedj.hkt.constant.ConstKindHelper.CONST;
import org.higherkindedj.hkt.Bifunctor;
import org.higherkindedj.hkt.constant.Const;
import org.higherkindedj.hkt.constant.ConstBifunctor;
Bifunctor<ConstKind2.Witness> bifunctor = ConstBifunctor.INSTANCE;
// A Const holding a count, with String as the phantom type
Const<Integer, String> count = new Const<>(42);
System.out.println("Original: " + count.value());
// Output: 42
// Transform the constant value with first()
Kind2<ConstKind2.Witness, String, String> transformed =
bifunctor.first(
n -> "Count: " + n, // Transforms the constant: 42 -> "Count: 42"
CONST.widen2(count));
System.out.println(CONST.narrow2(transformed).value());
// Output: "Count: 42"
// Transform ONLY the phantom type with second()
Kind2<ConstKind2.Witness, Integer, Double> phantomChanged =
bifunctor.second(
s -> s.length() * 2.0, // This defines the phantom type transformation
CONST.widen2(count));
System.out.println(CONST.narrow2(phantomChanged).value());
// Output: 42 (UNCHANGED!)
// Use bimap() - but only first() affects the constant
Kind2<ConstKind2.Witness, String, Boolean> both =
bifunctor.bimap(
n -> "#" + n, // Transforms constant: 42 -> "#42"
s -> s.isEmpty(), // Phantom type transformation only
CONST.widen2(count));
System.out.println(CONST.narrow2(both).value());
// Output: "#42"
Note: With Const, the second function in bimap never affects the constant value because the second type parameter is phantom. This property makes Const ideal for folds that accumulate a single value whilst traversing a structure, and for implementing getters in van Laarhoven lens patterns.
For more on Const and its applications in folds and lens patterns, see the Const Type documentation.
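The phantom behaviour is easy to replicate in a few lines. This stand-alone sketch (not the library's Const) shows why second can never change the stored value: there is simply no A value to apply the function to.

```java
import java.util.function.Function;

public class ConstSketch {
    // Const stores only C; the A parameter is phantom (never stored).
    record Const<C, A>(C value) {
        // first() transforms the stored constant.
        <C2> Const<C2, A> first(Function<C, C2> f) {
            return new Const<>(f.apply(value));
        }
        // second() can only re-label the phantom type: g is never applied,
        // because there is no A value to feed it.
        <B> Const<C, B> second(Function<A, B> g) {
            return new Const<>(value);
        }
    }

    public static void main(String[] args) {
        Const<Integer, String> count = new Const<>(42);
        System.out.println(count.first(n -> "#" + n).value());    // #42
        System.out.println(count.second(String::length).value()); // 42, unchanged
    }
}
```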
Real-World Scenario: API Response Transformation
One of the most common uses of bifunctors is transforming internal data representations to external API formats.
// Internal representation uses simple error codes and domain objects
Either<String, UserData> internalResult = Either.left("USER_NOT_FOUND");
// External API requires structured error objects and formatted responses
Function<String, ApiError> toApiError =
code -> new ApiError(code, "Error occurred", 404);
Function<UserData, ApiResponse> toApiResponse =
user -> new ApiResponse(user.name(), user.email(), 200);
Bifunctor<EitherKind2.Witness> bifunctor = EitherBifunctor.INSTANCE;
Kind2<EitherKind2.Witness, ApiError, ApiResponse> apiResult =
bifunctor.bimap(
toApiError, // Transform internal error to API error format
toApiResponse, // Transform internal data to API response format
EITHER.widen2(internalResult));
// Result: Left(ApiError(USER_NOT_FOUND, Error occurred, 404))
This approach keeps your domain logic clean whilst providing flexible adaptation to external requirements.
Bifunctor vs Profunctor
Whilst both type classes work with dual-parameter types, they serve different purposes:
| Feature | Bifunctor | Profunctor |
|---|---|---|
| First parameter | Covariant (output) | Contravariant (input) |
| Second parameter | Covariant (output) | Covariant (output) |
| Typical use | Data structures with two outputs | Functions and transformations |
| Examples | Either<L, R>, Tuple2<A, B> | Function<A, B>, optics |
| Use case | Transform both "sides" of data | Adapt input and output of pipelines |
Use Bifunctor when: Both parameters represent data you want to transform (errors and results, first and second elements).
Use Profunctor when: The first parameter represents input (contravariant) and the second represents output (covariant), like in functions.
When to Use Bifunctor
Bifunctors are ideal when you need to:
- Normalise API responses by transforming both error and success formats
- Migrate data schemas by transforming both fields of legacy structures
- Format validation results by enhancing both error messages and valid values
- Process paired data like tuples, logs with results, or any product type
- Handle sum types uniformly by providing transformations for all variants
The power of bifunctors lies in their ability to abstract over the dual-parameter structure whilst preserving the semantics (sum vs product) of the underlying type.
Summary
- Bifunctor provides
bimap,first, andsecondfor transforming dual-parameter types - Sum types (Either, Validated) execute only one function based on which variant is present
- Product types (Tuple2, Writer) execute both functions since both values are present
- Use cases include API design, validation, data migration, and error handling
- Differs from Profunctor by being covariant in both parameters rather than contravariant/covariant
Understanding bifunctors empowers you to write generic, reusable transformation logic that works uniformly across diverse dual-parameter types.
For-Comprehensions
- How to transform nested flatMap chains into readable, sequential code
- The four types of operations: generators (.from()), bindings (.let()), guards (.when()), and projections (.yield())
- Building complex workflows with StateT and other monad transformers
- Converting "pyramid of doom" code into clean, imperative-style scripts
- Real-world examples from simple Maybe operations to complex state management
Endless nested callbacks and unreadable chains of flatMap calls can be tiresome. The higher-kinded-j library brings the elegance and power of Scala-style for-comprehensions to Java, allowing you to write complex asynchronous and sequential logic in a way that is clean, declarative, and easy to follow.
Let's see how to transform "callback hell" into a readable, sequential script.
The "Pyramid of Doom" Problem
In functional programming, monads are a powerful tool for sequencing operations, especially those with a context like Optional, List, or CompletableFuture. However, chaining these operations with flatMap can quickly become hard to read.
Consider combining three Maybe values:
// The "nested" way
Kind<MaybeKind.Witness, Integer> result = maybeMonad.flatMap(a ->
maybeMonad.flatMap(b ->
maybeMonad.map(c -> a + b + c, maybeC),
maybeB),
maybeA);
This code works, but the logic is buried inside nested lambdas. The intent—to simply get values from maybeA, maybeB, and maybeC and add them—is obscured. This is often called the "pyramid of doom."
For: A Fluent, Sequential Builder
The For comprehension builder provides a much more intuitive way to write the same logic. It lets you express the sequence of operations as if they were simple, imperative steps.
Here’s the same example rewritten with the For builder:
import static org.higherkindedj.hkt.maybe.MaybeKindHelper.MAYBE;
import org.higherkindedj.hkt.expression.For;
// ... other imports
var maybeMonad = MaybeMonad.INSTANCE;
var maybeA = MAYBE.just(5);
var maybeB = MAYBE.just(10);
var maybeC = MAYBE.just(20);
// The clean, sequential way
var result = For.from(maybeMonad, maybeA) // Get a from maybeA
.from(a -> maybeB) // Then, get b from maybeB
.from(t -> maybeC) // Then, get c from maybeC
.yield((a, b, c) -> a + b + c); // Finally, combine them
System.out.println(MAYBE.narrow(result)); // Prints: Just(35)
This version is flat, readable, and directly expresses the intended sequence of operations. The For builder automatically handles the flatMap and map calls behind the scenes.
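Behind the scenes the builder performs exactly the desugaring shown in the "nested" version. Using plain java.util.Optional as a stand-in for Maybe, the three-generator comprehension corresponds to:

```java
import java.util.Optional;

public class ForDesugar {
    public static void main(String[] args) {
        Optional<Integer> a = Optional.of(5);
        Optional<Integer> b = Optional.of(10);
        Optional<Integer> c = Optional.of(20);

        // Each .from(...) becomes a flatMap; the final .yield(...) becomes a map.
        Optional<Integer> result =
                a.flatMap(x ->
                b.flatMap(y ->
                c.map(z -> x + y + z)));

        System.out.println(result); // Optional[35]
    }
}
```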
Core Operations of the For Builder
A for-comprehension is built by chaining four types of operations:
1. Generators: .from()
A generator is the workhorse of the comprehension. It takes a value from a previous step, uses it to produce a new monadic value (like another Maybe or List), and extracts the result for the next step. This is a direct equivalent of flatMap.
Each .from() adds a new variable to the scope of the comprehension.
// Generates all combinations of userLogin IDs and roles
var userRoles = For.from(listMonad, LIST.widen(List.of("userLogin-1", "userLogin-2"))) // a: "userLogin-1", "userLogin-2"
.from(a -> LIST.widen(List.of("viewer", "editor"))) // b: "viewer", "editor"
.yield((a, b) -> a + " is a " + b);
// Result: ["userLogin-1 is a viewer", "userLogin-1 is a editor", "userLogin-2 is a viewer", "userLogin-2 is a editor"]
2. Value Bindings: .let()
A .let() binding allows you to compute a pure, simple value from the results you've gathered so far and add it to the scope. It does not involve a monad. This is equivalent to a map operation that carries the new value forward.
var idMonad = IdMonad.instance();
var result = For.from(idMonad, Id.of(10)) // a = 10
.let(a -> a * 2) // b = 20 (a pure calculation)
.yield((a, b) -> "Value: " + a + ", Doubled: " + b);
// Result: "Value: 10, Doubled: 20"
System.out.println(ID.unwrap(result));
3. Guards: .when()
For monads that can represent failure or emptiness (like List, Maybe, or Optional), you can use .when() to filter results. If the condition is false, the current computational path is stopped by returning the monad's "zero" value (e.g., an empty list or Maybe.nothing()).
This feature requires a MonadZero instance. See the MonadZero documentation for more details.
var evens = For.from(listMonad, LIST.widen(List.of(1, 2, 3, 4, 5, 6)))
.when(i -> i % 2 == 0) // Guard: only keep even numbers
.yield(i -> i);
// Result: [2, 4, 6]
4. Projection: .yield()
Every comprehension ends with .yield(). This is the final map operation where you take all the values you've gathered from the generators and bindings and produce your final result. You can access the bound values as individual lambda parameters or as a single Tuple.
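As a rough mental model, the three operation kinds correspond to familiar java.util.stream calls. This sketch uses plain JDK streams rather than the library's Kind-wrapped types, purely for comparison:

```java
import java.util.List;
import java.util.stream.Stream;

public class DesugarDemo {
    public static void main(String[] args) {
        // Generator (.from) ~ flatMap, projection (.yield) ~ map
        List<String> userRoles =
            Stream.of("userLogin-1", "userLogin-2")
                .flatMap(a -> Stream.of("viewer", "editor")
                    .map(b -> a + " is a " + b))
                .toList();
        System.out.println(userRoles);

        // Guard (.when) ~ filter
        List<Integer> evens =
            Stream.of(1, 2, 3, 4, 5, 6)
                .filter(i -> i % 2 == 0)
                .toList();
        System.out.println(evens); // [2, 4, 6]
    }
}
```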
Turn the power up: StateT Example
The true power of for-comprehensions becomes apparent when working with complex structures like monad transformers. A StateT over Optional represents a stateful computation that can fail. Writing this with nested flatMap calls would be extremely complex. With the For builder, it becomes a simple, readable script.
import static org.higherkindedj.hkt.optional.OptionalKindHelper.OPTIONAL;
import static org.higherkindedj.hkt.state_t.StateTKindHelper.STATE_T;
// ... other imports
private static void stateTExample() {
final var optionalMonad = OptionalMonad.INSTANCE;
final var stateTMonad = StateTMonad.<Integer, OptionalKind.Witness>instance(optionalMonad);
// Helper: adds a value to the state (an integer)
final Function<Integer, Kind<StateTKind.Witness<Integer, OptionalKind.Witness>, Unit>> add =
n -> StateT.create(s -> optionalMonad.of(StateTuple.of(s + n, Unit.INSTANCE)), optionalMonad);
// Helper: gets the current state as the value
final var get = StateT.<Integer, OptionalKind.Witness, Integer>create(s -> optionalMonad.of(StateTuple.of(s, s)), optionalMonad);
// This workflow looks like a simple script, but it's a fully-typed, purely functional composition!
final var statefulComputation =
For.from(stateTMonad, add.apply(10)) // Add 10 to state
.from(a -> add.apply(5)) // Then, add 5 more
.from(b -> get) // Then, get the current state (15)
.let(t -> "The state is " + t._3()) // Compute a string from it
.yield((a, b, c, d) -> d + ", original value was " + c); // Produce the final string
// Run the computation with an initial state of 0
final var resultOptional = STATE_T.runStateT(statefulComputation, 0);
final Optional<StateTuple<Integer, String>> result = OPTIONAL.narrow(resultOptional);
result.ifPresent(res -> {
System.out.println("Final value: " + res.value());
System.out.println("Final state: " + res.state());
});
// Expected Output:
// Final value: The state is 15, original value was 15
// Final state: 15
}
In this example, the For comprehension hides the complexity of threading the state (Integer) and handling potential failures (Optional), making the logic clear and maintainable.
For a more extensive example using the full power of the For comprehension, head over to the Order Workflow.
Similarities to Scala
If you're familiar with Scala, you'll recognise the pattern. In Scala, a for-comprehension looks like this:
for {
a <- maybeA
b <- maybeB
if (a + b > 10)
c = a + b
} yield c * 2
This is built-in syntactic sugar that the compiler translates into a series of flatMap, map, and withFilter calls.
The For builder in higher-kinded-j provides the same expressive power through a method-chaining API.
Supported Types

Higher-Kinded-J provides Higher-Kinded Type (HKT) simulation capabilities, allowing various Java types and custom types to be used with generic functional type classes like Functor, Applicative, Monad, and MonadError.
This is achieved by representing the application of a type constructor F to a type A as Kind<F_WITNESS, A>, where F_WITNESS is a special "witness" or phantom type unique to the type constructor F.
Key for Understanding Entries:
- Type: The Java type or custom type being simulated.
- `XxxKind<A>` Interface: The specific `Kind` interface for this type (e.g., `OptionalKind<A>`). It extends `Kind<XxxKind.Witness, A>` and usually contains the nested `final class Witness {}`.
- Witness Type `F_WITNESS`: The phantom type used as the first parameter to `Kind` (e.g., `OptionalKind.Witness`). This is what parameterizes the type classes (e.g., `Monad<OptionalKind.Witness>`).
- `XxxKindHelper` Class: Provides `widen` and `narrow` methods.
  - For external types (like `java.util.List`, `java.util.Optional`), `widen` typically creates an internal `XxxHolder` record which implements `XxxKind<A>`, and `narrow` extracts the Java type from this holder.
  - For library-defined types (`Id`, `IO`, `Maybe`, `Either`, `Validated`, `Try`, monad transformers), the type itself directly implements `XxxKind<A>`. This means `widen` performs a null check and direct cast (zero overhead), and `narrow` checks `instanceof` the actual type and casts.
- Type Class Instances: Concrete implementations of `Functor<F_WITNESS>`, `Monad<F_WITNESS>`, etc.
1. Id<A> (Identity)
- Type Definition: A custom record (`Id`) that directly wraps a value `A`. It's the simplest monad.
- `IdKind<A>` Interface: `Id<A>` itself implements `IdKind<A>`, and `IdKind<A>` extends `Kind<IdKind.Witness, A>`.
- Witness Type `F_WITNESS`: `IdKind.Witness`
- `IdKindHelper`: `IdKindHelper` (`wrap` casts `Id` to `Kind`, `unwrap` casts `Kind` to `Id`; `narrow` is a convenience for `unwrap`).
- Type Class Instances: `IdMonad` (`Monad<IdKind.Witness>`)
- Notes: `Id.of(a)` creates `Id(a)`. `map` and `flatMap` operate directly. Useful as a base for monad transformers and generic programming with no extra effects. `Id<A>` directly implements `IdKind<A>`.
- Usage: How to use the Identity Monad
2. java.util.List<A>
- Type Definition: Standard Java `java.util.List<A>`.
- `ListKind<A>` Interface: `ListKind<A>` extends `Kind<ListKind.Witness, A>`.
- Witness Type `F_WITNESS`: `ListKind.Witness`
- `ListKindHelper`: Uses an internal `ListHolder<A>` record that implements `ListKind<A>` to wrap `java.util.List<A>`.
- Type Class Instances: `ListFunctor` (`Functor<ListKind.Witness>`), `ListMonad` (`Monad<ListKind.Witness>`)
- Notes: Standard list monad behaviour. `of(a)` creates a singleton list `List.of(a)`; `of(null)` results in an empty list.
- Usage: How to use the List Monad
3. java.util.stream.Stream<A>
- Type Definition: Standard Java `java.util.stream.Stream<A>`.
- `StreamKind<A>` Interface: `StreamKind<A>` extends `Kind<StreamKind.Witness, A>`.
- Witness Type `F_WITNESS`: `StreamKind.Witness`
- `StreamKindHelper`: Uses an internal `StreamHolder<A>` record that implements `StreamKind<A>` to wrap `java.util.stream.Stream<A>`. Provides `widen`, `narrow`.
- Type Class Instances: `StreamFunctor` (`Functor<StreamKind.Witness>`), `StreamApplicative` (`Applicative<StreamKind.Witness>`), `StreamMonad` (`MonadZero<StreamKind.Witness>`), `StreamTraverse` (`Traverse<StreamKind.Witness>`, `Foldable<StreamKind.Witness>`)
- Notes: Lazy, potentially infinite sequences with single-use semantics: each `Stream` can only be consumed once, and attempting to reuse a consumed stream throws `IllegalStateException`. `of(a)` creates a singleton stream; `of(null)` creates an empty stream. `zero()` returns an empty stream. Use `StreamOps` for additional utility operations.
- Usage: How to use the Stream Monad
4. Trampoline<A>
- Type Definition: Custom sealed interface (`Trampoline`) implementing stack-safe recursion through trampolining. Provides three constructors: `Done<A>` (completed computation), `More<A>` (deferred computation), and `FlatMap<A, B>` (monadic sequencing).
- `TrampolineKind<A>` Interface: `Trampoline<A>` itself implements `TrampolineKind<A>`, and `TrampolineKind<A>` extends `Kind<TrampolineKind.Witness, A>`.
- Witness Type `F_WITNESS`: `TrampolineKind.Witness`
- `TrampolineKindHelper`: `widen` casts `Trampoline` to `Kind`; `narrow` casts `Kind` to `Trampoline`. Provides `done(value)` for completed computations and `defer(supplier)` for deferred evaluation.
- Type Class Instances: `TrampolineFunctor` (`Functor<TrampolineKind.Witness>`), `TrampolineMonad` (`Monad<TrampolineKind.Witness>`)
- Notes: Enables stack-safe tail recursion by converting recursive calls into iterative data structure processing, preventing `StackOverflowError` in deeply recursive computations (verified with 100,000+ iterations). `done(value)` creates an already evaluated result; `defer(supplier)` defers computation. The `run()` method executes the trampoline iteratively using an explicit stack. Essential for recursive algorithms (factorial, Fibonacci, tree traversals); `TrampolineUtils` provides guaranteed stack-safe applicative operations.
- Usage: How to use the Trampoline Monad
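The trampolining technique itself can be sketched in a few lines of plain Java. The `Tramp`, `Done`, `More`, and `run` names below are hypothetical, chosen for illustration; they are not the library's actual API:

```java
import java.util.function.Supplier;

// Minimal, self-contained sketch of trampolining (illustrative only).
sealed interface Tramp<A> permits Done, More {}
record Done<A>(A value) implements Tramp<A> {}
record More<A>(Supplier<Tramp<A>> next) implements Tramp<A> {}

public class TrampolineSketch {
    // Run iteratively: each deferred step is unwrapped in a loop, not on the call stack.
    static <A> A run(Tramp<A> t) {
        while (t instanceof More<A> m) {
            t = m.next().get();
        }
        return ((Done<A>) t).value();
    }

    // Stack-safe summation: each "recursive" call is returned as data, not invoked.
    static Tramp<Long> sum(long n, long acc) {
        return n == 0 ? new Done<>(acc) : new More<>(() -> sum(n - 1, acc + n));
    }

    public static void main(String[] args) {
        // 100,000 steps: a naive recursive version would risk StackOverflowError.
        System.out.println(run(sum(100_000, 0))); // 5000050000
    }
}
```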
5. Free<F, A>
- Type Definition: Custom sealed interface (`Free`) representing programmes as data structures that can be interpreted in different ways. Provides three constructors: `Pure<F,A>` (terminal value), `Suspend<F,A>` (suspended computation), and `FlatMapped<F,X,A>` (monadic sequencing).
- `FreeKind<F, A>` Interface: `Free<F,A>` itself implements `FreeKind<F,A>`, and `FreeKind<F,A>` extends `Kind<FreeKind.Witness<F>, A>`.
- Witness Type `F_WITNESS`: `FreeKind.Witness<F>` (where `F` is the instruction set functor)
- `FreeKindHelper`: `widen` casts `Free` to `Kind`; `narrow` casts `Kind` to `Free`. Provides `pure(value)`, `suspend(computation)`, `liftF(fa, functor)`.
- Type Class Instances: `FreeFunctor<F>` (`Functor<FreeKind.Witness<F>>`), `FreeMonad<F>` (`Monad<FreeKind.Witness<F>>`)
- Notes: Enables building domain-specific languages (DSLs) as composable data structures. Programmes are interpreted via `foldMap` with natural transformations, allowing multiple interpreters (IO, test, optimisation, etc.). Execution is stack-safe, using Higher-Kinded-J's `Trampoline` monad internally and demonstrating library composability (verified with 10,000+ operations). Essential for separating programme description from execution, enabling testability and alternative interpretations. Provides `liftF` to lift functor values into `Free`, `map` and `flatMap` for composition, and `foldMap` for interpretation. Useful for building testable workflows, query languages, and effect systems where the same programme needs different execution strategies.
- Usage: How to use the Free Monad
6. java.util.Optional<A>
- Type Definition: Standard Java `java.util.Optional<A>`.
- `OptionalKind<A>` Interface: `OptionalKind<A>` extends `Kind<OptionalKind.Witness, A>`.
- Witness Type `F_WITNESS`: `OptionalKind.Witness`
- `OptionalKindHelper`: Uses an internal `OptionalHolder<A>` record that implements `OptionalKind<A>` to wrap `java.util.Optional<A>`.
- Type Class Instances: `OptionalFunctor` (`Functor<OptionalKind.Witness>`), `OptionalMonad` (`MonadError<OptionalKind.Witness, Unit>`)
- Notes: `Optional.empty()` is the error state. `raiseError(Unit.INSTANCE)` creates `Optional.empty()`. `of(value)` uses `Optional.ofNullable(value)`.
- Usage: How to use the Optional Monad
7. Maybe<A>
- Type Definition: Custom sealed interface (`Maybe`) with `Just<A>` (non-null) and `Nothing<A>` implementations.
- `MaybeKind<A>` Interface: `Just<T>` and `Nothing<T>` directly implement `MaybeKind<T>`, which extends `Kind<MaybeKind.Witness, T>`.
- Witness Type `F_WITNESS`: `MaybeKind.Witness`
- `MaybeKindHelper`: `widen` performs a null check and casts `Maybe` to `Kind` (zero overhead); `narrow` checks `instanceof Maybe` and casts. Provides `just(value)`, `nothing()`, `fromNullable(value)`.
- Type Class Instances: `MaybeFunctor` (`Functor<MaybeKind.Witness>`), `MaybeMonad` (`MonadError<MaybeKind.Witness, Unit>`)
- Notes: `Nothing` is the error state; `raiseError(Unit.INSTANCE)` creates `Nothing`. `Maybe.just(value)` requires non-null. `MaybeMonad.of(value)` uses `Maybe.fromNullable()`.
- Usage: How to use the Maybe Monad
8. Either<L, R>
- Type Definition: Custom sealed interface (`Either`) with `Left<L,R>` and `Right<L,R>` records.
- `EitherKind<L, R>` Interface: `Either.Left<L,R>` and `Either.Right<L,R>` directly implement `EitherKind<L,R>` (and `EitherKind2<L,R>` for bifunctor operations), which extends `Kind<EitherKind.Witness<L>, R>`.
- Witness Type `F_WITNESS`: `EitherKind.Witness<L>` (error type `L` is fixed for the witness).
- `EitherKindHelper`: `widen` performs a null check and casts `Either` to `Kind` (zero overhead); `narrow` checks `instanceof Either` and casts. Provides `left(l)`, `right(r)`.
- Type Class Instances: `EitherFunctor<L>` (`Functor<EitherKind.Witness<L>>`), `EitherMonad<L>` (`MonadError<EitherKind.Witness<L>, L>`)
- Notes: Right-biased. `Left(l)` is the error state. `of(r)` creates `Right(r)`.
- Usage: How to use the Either Monad
9. Try<A>
- Type Definition: Custom sealed interface (`Try`) with `Success<A>` and `Failure<A>` (wrapping `Throwable`).
- `TryKind<A>` Interface: `Try<A>` itself implements `TryKind<A>`, and `TryKind<A>` extends `Kind<TryKind.Witness, A>`.
- Witness Type `F_WITNESS`: `TryKind.Witness`
- `TryKindHelper`: `wrap` casts `Try` to `Kind`; `unwrap` casts `Kind` to `Try`. Provides `success(value)`, `failure(throwable)`, `tryOf(supplier)`.
- Type Class Instances: `TryFunctor` (`Functor<TryKind.Witness>`), `TryApplicative` (`Applicative<TryKind.Witness>`), `TryMonad` (`MonadError<TryKind.Witness, Throwable>`)
- Notes: `Failure(t)` is the error state. `of(v)` creates `Success(v)`.
- Usage: How to use the Try Monad
10. java.util.concurrent.CompletableFuture<A>
- Type Definition: Standard Java `java.util.concurrent.CompletableFuture<A>`.
- `CompletableFutureKind<A>` Interface: `CompletableFutureKind<A>` extends `Kind<CompletableFutureKind.Witness, A>`.
- Witness Type `F_WITNESS`: `CompletableFutureKind.Witness`
- `CompletableFutureKindHelper`: Uses an internal `CompletableFutureHolder<A>` record. Provides `wrap`, `unwrap`, `join`.
- Type Class Instances: `CompletableFutureFunctor` (`Functor<CompletableFutureKind.Witness>`), `CompletableFutureApplicative` (`Applicative<CompletableFutureKind.Witness>`), `CompletableFutureMonad` (`MonadError<CompletableFutureKind.Witness, Throwable>`, which also provides `Monad` behaviour)
- Notes: Represents asynchronous computations. A failed future is the error state. `of(v)` creates `CompletableFuture.completedFuture(v)`.
- Usage: How to use the CompletableFuture Monad
11. IO<A>
- Type Definition: Custom interface (`IO`) representing a deferred, potentially side-effecting computation.
- `IOKind<A>` Interface: `IO<A>` directly extends `IOKind<A>`, which extends `Kind<IOKind.Witness, A>`.
- Witness Type `F_WITNESS`: `IOKind.Witness`
- `IOKindHelper`: `widen` performs a null check and returns the `IO` directly as `Kind` (zero overhead); `narrow` checks `instanceof IO` and casts. Provides `delay(supplier)`, `unsafeRunSync(kind)`.
- Type Class Instances: `IOFunctor` (`Functor<IOKind.Witness>`), `IOApplicative` (`Applicative<IOKind.Witness>`), `IOMonad` (`Monad<IOKind.Witness>`)
- Notes: Evaluation is deferred until `unsafeRunSync`. Exceptions during execution are generally unhandled by `IOMonad` itself unless caught within the IO's definition.
- Usage: How to use the IO Monad
12. Lazy<A>
- Type Definition: Custom class (`Lazy`) for deferred computation with memoization.
- `LazyKind<A>` Interface: `Lazy<A>` itself implements `LazyKind<A>`, and `LazyKind<A>` extends `Kind<LazyKind.Witness, A>`.
- Witness Type `F_WITNESS`: `LazyKind.Witness`
- `LazyKindHelper`: `wrap` casts `Lazy` to `Kind`; `unwrap` casts `Kind` to `Lazy`. Provides `defer(supplier)`, `now(value)`, `force(kind)`.
- Type Class Instances: `LazyMonad` (`Monad<LazyKind.Witness>`)
- Notes: The result or exception is memoized. `of(a)` creates an already evaluated `Lazy.now(a)`.
- Usage: How to use the Lazy Monad
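The underlying idea, defer a computation and memoize its result, can be sketched with a hypothetical `MyLazy` class (illustrative only; not the library's `Lazy`):

```java
import java.util.function.Supplier;

// Minimal sketch of deferred computation with memoization (illustrative only).
final class MyLazy<A> {
    private Supplier<A> thunk;
    private A value;
    private boolean evaluated = false;

    MyLazy(Supplier<A> thunk) { this.thunk = thunk; }

    A force() {
        if (!evaluated) {      // compute at most once
            value = thunk.get();
            evaluated = true;
            thunk = null;      // let the supplier be garbage-collected
        }
        return value;
    }
}

public class LazySketch {
    static int evaluations = 0;

    public static void main(String[] args) {
        MyLazy<Integer> lazy = new MyLazy<>(() -> { evaluations++; return 42; });
        System.out.println(evaluations);   // 0: nothing computed yet
        System.out.println(lazy.force());  // 42
        System.out.println(lazy.force());  // 42, from the memoized result
        System.out.println(evaluations);   // 1: the supplier ran exactly once
    }
}
```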
13. Reader<R_ENV, A>
- Type Definition: Custom functional interface (`Reader`) wrapping `Function<R_ENV, A>`.
- `ReaderKind<R_ENV, A>` Interface: `Reader<R_ENV,A>` itself implements `ReaderKind<R_ENV,A>`, and `ReaderKind<R_ENV,A>` extends `Kind<ReaderKind.Witness<R_ENV>, A>`.
- Witness Type `F_WITNESS`: `ReaderKind.Witness<R_ENV>` (environment type `R_ENV` is fixed).
- `ReaderKindHelper`: `wrap` casts `Reader` to `Kind`; `unwrap` casts `Kind` to `Reader`. Provides `reader(func)`, `ask()`, `constant(value)`, `runReader(kind, env)`.
- Type Class Instances: `ReaderFunctor<R_ENV>` (`Functor<ReaderKind.Witness<R_ENV>>`), `ReaderApplicative<R_ENV>` (`Applicative<ReaderKind.Witness<R_ENV>>`), `ReaderMonad<R_ENV>` (`Monad<ReaderKind.Witness<R_ENV>>`)
- Notes: `of(a)` creates a `Reader` that ignores the environment and returns `a`.
- Usage: How to use the Reader Monad
14. State<S, A>
- Type Definition: Custom functional interface (`State`) wrapping `Function<S, StateTuple<S, A>>`.
- `StateKind<S,A>` Interface: `State<S,A>` itself implements `StateKind<S,A>`, and `StateKind<S,A>` extends `Kind<StateKind.Witness<S>, A>`.
- Witness Type `F_WITNESS`: `StateKind.Witness<S>` (state type `S` is fixed).
- `StateKindHelper`: `wrap` casts `State` to `Kind`; `unwrap` casts `Kind` to `State`. Provides `pure(value)`, `get()`, `set(state)`, `modify(func)`, `inspect(func)`, `runState(kind, initialState)`, etc.
- Type Class Instances: `StateFunctor<S>` (`Functor<StateKind.Witness<S>>`), `StateApplicative<S>` (`Applicative<StateKind.Witness<S>>`), `StateMonad<S>` (`Monad<StateKind.Witness<S>>`)
- Notes: `of(a)` (pure) returns `a` without changing state.
- Usage: How to use the State Monad
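The core idea, a function from a state to a (new state, value) pair whose composition threads the state automatically, can be sketched with hypothetical `MyState` and `Pair` types (illustrative only; not the library's API):

```java
import java.util.function.Function;

// Minimal sketch of the State idea (illustrative only).
record Pair<S, A>(S state, A value) {}

interface MyState<S, A> extends Function<S, Pair<S, A>> {
    default <B> MyState<S, B> flatMap(Function<A, MyState<S, B>> f) {
        return s -> {
            Pair<S, A> p = this.apply(s);              // run this step
            return f.apply(p.value()).apply(p.state()); // thread the new state onward
        };
    }
}

public class StateSketch {
    // Adds n to the state and also yields the new state as the value.
    static MyState<Integer, Integer> add(int n) {
        return s -> new Pair<>(s + n, s + n);
    }

    public static void main(String[] args) {
        MyState<Integer, Integer> prog = add(10).flatMap(a -> add(5));
        Pair<Integer, Integer> result = prog.apply(0); // run with initial state 0
        System.out.println(result.state()); // 15
        System.out.println(result.value()); // 15
    }
}
```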
15. Writer<W, A>
- Type Definition: Custom record (`Writer`) holding `(W log, A value)`. Requires `Monoid<W>`.
- `WriterKind<W, A>` Interface: `Writer<W,A>` itself implements `WriterKind<W,A>`, and `WriterKind<W,A>` extends `Kind<WriterKind.Witness<W>, A>`.
- Witness Type `F_WITNESS`: `WriterKind.Witness<W>` (log type `W` and its `Monoid` are fixed).
- `WriterKindHelper`: `wrap` casts `Writer` to `Kind`; `unwrap` casts `Kind` to `Writer`. Provides `value(monoid, val)`, `tell(monoid, log)`, `runWriter(kind)`, etc.
- Type Class Instances (require `Monoid<W>` for Applicative/Monad): `WriterFunctor<W>` (`Functor<WriterKind.Witness<W>>`), `WriterApplicative<W>` (`Applicative<WriterKind.Witness<W>>`), `WriterMonad<W>` (`Monad<WriterKind.Witness<W>>`)
- Notes: `of(a)` (`value`) produces `a` with an empty log (from `Monoid.empty()`).
- Usage: How to use the Writer Monad
16. Validated<E, A>
- Type Definition: Custom sealed interface (`Validated`) with `Valid<E, A>` (holding `A`) and `Invalid<E, A>` (holding `E`) implementations.
- `ValidatedKind<E, A>` Interface: Defines the HKT structure (`ValidatedKind`) for `Validated<E,A>`. It extends `Kind<ValidatedKind.Witness<E>, A>`. Concrete `Valid<E,A>` and `Invalid<E,A>` instances are cast to this kind by `ValidatedKindHelper`.
- Witness Type `F_WITNESS`: `ValidatedKind.Witness<E>` (error type `E` is fixed for the HKT witness).
- `ValidatedKindHelper` Class: `widen` casts `Validated<E,A>` (specifically `Valid` or `Invalid` instances) to `Kind<ValidatedKind.Witness<E>, A>`; `narrow` casts the `Kind` back to `Validated<E,A>`. Provides static factory methods `valid(value)` and `invalid(error)` that return the Kind-wrapped type.
- Type Class Instances (error type `E` is fixed for the monad instance): `ValidatedMonad<E>` (`MonadError<ValidatedKind.Witness<E>, E>`). This also provides `Monad`, `Functor`, and `Applicative` behaviour.
- Notes: `Validated` is right-biased, meaning operations like `map` and `flatMap` apply to the `Valid` case and propagate `Invalid` untouched. `ValidatedMonad.of(a)` creates a `Valid(a)`. As a `MonadError`, `ValidatedMonad` provides `raiseError(error)` to create an `Invalid(error)` and `handleErrorWith(kind, handler)` for standardised error recovery. The `ap` method is also right-biased and does not accumulate errors from multiple `Invalid`s in the typical applicative sense; it propagates the first `Invalid` encountered or an `Invalid` function.
- Usage: How to use the Validated Monad
17. Const<C, A>
- Type Definition: Custom record (`Const`) holding a constant value of type `C` whilst treating `A` as a phantom type parameter (present in the type signature but not stored).
- `ConstKind2<C, A>` Interface: Extends `Kind2<ConstKind2.Witness, C, A>`. This interface allows `Const` to be used with bifunctor operations.
- Witness Type `F_WITNESS`: `ConstKind2.Witness` (used for bifunctor type class instances).
- `ConstKindHelper` Class: Provides `widen2` to cast `Const<C, A>` to `Kind2<ConstKind2.Witness, C, A>` and `narrow2` to cast back. Uses an internal `ConstKind2Holder<C, A>` record that implements `ConstKind2<C, A>`.
- Type Class Instances (only a bifunctor; there is no monad instance, as mapping the phantom type has no computational effect): `ConstBifunctor` (`Bifunctor<ConstKind2.Witness>`). This instance provides `first` (transforms the constant value), `second` (changes only the phantom type), and `bimap` (combines both, though only `first` affects the constant value).
- Notes: The second type parameter `A` is phantom: it exists only in the type signature and has no runtime representation. Calling `mapSecond` or `second` preserves the constant value whilst changing the phantom type in the signature. This makes `Const` particularly useful for fold implementations (accumulating a single value), getter patterns in lens libraries (van Laarhoven lenses), and data extraction from structures without transformation. The mapper function in `second` is applied to `null` for exception propagation, so use null-safe mappers. Similar to `Const` in Scala's Cats and Scalaz libraries.
- Usage: How to use the Const Type
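The phantom-parameter trick can be sketched with a hypothetical `MyConst` record (illustrative only; not the library's `Const`):

```java
import java.util.function.Function;

// Minimal sketch of the Const idea: C is stored, A is phantom (illustrative only).
record MyConst<C, A>(C value) {
    // "Mapping" the phantom side changes only the type signature, never the value.
    <B> MyConst<C, B> mapSecond(Function<? super A, ? extends B> f) {
        return new MyConst<>(value);
    }
}

public class ConstSketch {
    public static void main(String[] args) {
        MyConst<String, Integer> c = new MyConst<>("accumulated");
        MyConst<String, Boolean> c2 = c.mapSecond(i -> i > 0); // value unchanged
        System.out.println(c2.value()); // accumulated
    }
}
```

This is why `Const` works as an accumulator in folds and getters: whatever "mapping" the surrounding machinery performs, the constant value rides through untouched.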
The CompletableFutureMonad:
Asynchronous Computations with CompletableFuture
- How to compose asynchronous operations functionally
- Using MonadError capabilities for async error handling and recovery
- Building non-blocking workflows with
map,flatMap, andhandleErrorWith - Integration with EitherT for combining async operations with typed errors
- Real-world patterns for resilient microservice communication
Java's java.util.concurrent.CompletableFuture<T> is a powerful tool for asynchronous programming. The higher-kinded-j library provides a way to treat CompletableFuture as a monadic context using the HKT simulation. This allows developers to compose asynchronous operations and handle their potential failures (Throwable) in a more functional and generic style, leveraging type classes like Functor, Applicative, Monad, and crucially, MonadError.
Higher-Kinded Bridge for CompletableFuture
Type Classes
The simulation for CompletableFuture involves these components:
- `CompletableFuture<A>`: The standard Java class representing an asynchronous computation that will eventually result in a value of type `A` or fail with an exception (a `Throwable`).
- `CompletableFutureKind<A>`: The HKT marker interface (`Kind<CompletableFutureKind.Witness, A>`) for `CompletableFuture`. This allows `CompletableFuture` to be used generically with type classes. The witness type is `CompletableFutureKind.Witness`.
- `CompletableFutureKindHelper`: The utility class for bridging between `CompletableFuture<A>` and `CompletableFutureKind<A>`. Key methods:
  - `widen(CompletableFuture<A>)`: Wraps a standard `CompletableFuture` into its `Kind` representation.
  - `narrow(Kind<CompletableFutureKind.Witness, A>)`: Unwraps the `Kind` back to the concrete `CompletableFuture`. Throws `KindUnwrapException` if the input `Kind` is invalid.
  - `join(Kind<CompletableFutureKind.Witness, A>)`: A convenience method to unwrap the `Kind` and then block (`join()`) on the underlying `CompletableFuture` to get its result. It re-throws runtime exceptions and errors directly but wraps checked exceptions in `CompletionException`. Use primarily for testing or at the very end of an application where blocking is acceptable.
- `CompletableFutureFunctor`: Implements `Functor<CompletableFutureKind.Witness>`. Provides `map`, which corresponds to `CompletableFuture.thenApply()`.
- `CompletableFutureApplicative`: Extends `Functor`, implements `Applicative<CompletableFutureKind.Witness>`.
  - `of(A value)`: Creates an already successfully completed `CompletableFutureKind` using `CompletableFuture.completedFuture(value)`.
  - `ap(Kind<F, Function<A,B>>, Kind<F, A>)`: Corresponds to `CompletableFuture.thenCombine()`, applying a function from one future to the value of another when both complete.
- `CompletableFutureMonad`: Extends `Applicative`, implements `MonadError<CompletableFutureKind.Witness, Throwable>`. This is often the most useful instance to work with.
  - `flatMap(Function<A, Kind<F, B>>, Kind<F, A>)`: Corresponds to `CompletableFuture.thenCompose()`, sequencing asynchronous operations where one depends on the result of the previous one.
  - `raiseError(Throwable error)`: Creates an already exceptionally completed `CompletableFutureKind` using `CompletableFuture.failedFuture(error)`.
  - `handleErrorWith(Kind<F, A>, Function<Throwable, Kind<F, A>>)`: Corresponds to `CompletableFuture.exceptionallyCompose()`, allowing asynchronous recovery from failures.
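Since each type class method delegates to a standard `CompletableFuture` operation, the correspondence can be seen with plain JDK code, no library types involved:

```java
import java.util.concurrent.CompletableFuture;

public class FutureCorrespondence {
    public static void main(String[] args) {
        // The raw CompletableFuture operations the type class methods map onto:
        CompletableFuture<Integer> f = CompletableFuture.completedFuture(10);  // ~ of(10)

        CompletableFuture<String> mapped = f.thenApply(v -> "Result: " + v);   // ~ map

        CompletableFuture<String> flatMapped =                                  // ~ flatMap
            mapped.thenCompose(s -> CompletableFuture.supplyAsync(() -> s + "!"));

        CompletableFuture<String> combined =                                    // ~ ap / map2
            f.thenCombine(CompletableFuture.completedFuture(5), (a, b) -> "" + (a + b));

        CompletableFuture<String> recovered =                                   // ~ handleErrorWith
            CompletableFuture.<String>failedFuture(new RuntimeException("boom"))
                .exceptionallyCompose(t -> CompletableFuture.completedFuture("recovered"));

        System.out.println(mapped.join());     // Result: 10
        System.out.println(flatMapped.join()); // Result: 10!
        System.out.println(combined.join());   // 15
        System.out.println(recovered.join());  // recovered
    }
}
```

The value the library adds on top of these calls is uniformity: the same `map`/`flatMap`/`handleErrorWith` vocabulary works for `CompletableFuture`, `Optional`, `Either`, and every other simulated monad.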
Purpose and Usage
- Functional Composition of Async Ops: Use `map`, `ap`, and `flatMap` (via the type class instances) to build complex asynchronous workflows in a declarative style, similar to how you'd compose synchronous operations with `Optional` or `List`.
- Unified Error Handling: Treat asynchronous failures (`Throwable`) consistently using `MonadError` operations (`raiseError`, `handleErrorWith`). This allows integrating error handling directly into the composition chain.
- HKT Integration: Enables writing generic code that can operate on `CompletableFuture` alongside other simulated monadic types (like `Optional`, `Either`, `IO`) by programming against the `Kind<F, A>` interface and type classes. This is powerfully demonstrated when using `CompletableFutureKind` as the outer monad `F` in the `EitherT` transformer (see Order Example Walkthrough).
Examples
public void createExample() {
// Get the MonadError instance
CompletableFutureMonad futureMonad = CompletableFutureMonad.INSTANCE;
// --- Using of() ---
// Creates a Kind wrapping an already completed future
Kind<CompletableFutureKind.Witness, String> successKind = futureMonad.of("Success!");
// --- Using raiseError() ---
// Creates a Kind wrapping an already failed future
RuntimeException error = new RuntimeException("Something went wrong");
Kind<CompletableFutureKind.Witness, String> failureKind = futureMonad.raiseError(error);
// --- Wrapping existing CompletableFutures ---
CompletableFuture<Integer> existingFuture = CompletableFuture.supplyAsync(() -> {
try {
TimeUnit.MILLISECONDS.sleep(20);
} catch (InterruptedException e) { /* ignore */ }
return 123;
});
Kind<CompletableFutureKind.Witness, Integer> wrappedExisting = FUTURE.widen(existingFuture);
CompletableFuture<Integer> failedExisting = new CompletableFuture<>();
failedExisting.completeExceptionally(new IllegalArgumentException("Bad input"));
Kind<CompletableFutureKind.Witness, Integer> wrappedFailed = FUTURE.widen(failedExisting);
// You typically don't interact with 'narrow' unless needed at boundaries or for helper methods like 'join'.
CompletableFuture<String> unwrappedSuccess = FUTURE.narrow(successKind);
CompletableFuture<String> unwrappedFailure = FUTURE.narrow(failureKind);
}
These examples show how to use the type class instance (futureMonad) to apply operations.
public void monadExample() {
// Get the MonadError instance
CompletableFutureMonad futureMonad = CompletableFutureMonad.INSTANCE;
// --- map (thenApply) ---
Kind<CompletableFutureKind.Witness, Integer> initialValueKind = futureMonad.of(10);
Kind<CompletableFutureKind.Witness, String> mappedKind = futureMonad.map(
value -> "Result: " + value,
initialValueKind
);
// Join for testing/demonstration
System.out.println("Map Result: " + FUTURE.join(mappedKind)); // Output: Result: 10
// --- flatMap (thenCompose) ---
// Function A -> Kind<F, B>
Function<String, Kind<CompletableFutureKind.Witness, String>> asyncStep2 =
input -> FUTURE.widen(
CompletableFuture.supplyAsync(() -> input + " -> Step2 Done")
);
Kind<CompletableFutureKind.Witness, String> flatMappedKind = futureMonad.flatMap(
asyncStep2,
mappedKind // Result from previous map step ("Result: 10")
);
System.out.println("FlatMap Result: " + FUTURE.join(flatMappedKind)); // Output: Result: 10 -> Step2 Done
// --- ap (thenCombine) ---
Kind<CompletableFutureKind.Witness, Function<Integer, String>> funcKind = futureMonad.of(i -> "FuncResult:" + i);
Kind<CompletableFutureKind.Witness, Integer> valKind = futureMonad.of(25);
Kind<CompletableFutureKind.Witness, String> apResult = futureMonad.ap(funcKind, valKind);
System.out.println("Ap Result: " + FUTURE.join(apResult)); // Output: FuncResult:25
// --- mapN ---
Kind<CompletableFutureKind.Witness, Integer> f1 = futureMonad.of(5);
Kind<CompletableFutureKind.Witness, String> f2 = futureMonad.of("abc");
BiFunction<Integer, String, String> combine = (i, s) -> s + i;
Kind<CompletableFutureKind.Witness, String> map2Result = futureMonad.map2(f1, f2, combine);
System.out.println("Map2 Result: " + FUTURE.join(map2Result)); // Output: abc5
}
This is where CompletableFutureMonad shines, providing functional error recovery.
public void errorHandlingExample(){
// Get the MonadError instance
CompletableFutureMonad futureMonad = CompletableFutureMonad.INSTANCE;
RuntimeException runtimeEx = new IllegalStateException("Processing Failed");
IOException checkedEx = new IOException("File Not Found");
Kind<CompletableFutureKind.Witness, String> failedRuntimeKind = futureMonad.raiseError(runtimeEx);
Kind<CompletableFutureKind.Witness, String> failedCheckedKind = futureMonad.raiseError(checkedEx);
Kind<CompletableFutureKind.Witness, String> successKind = futureMonad.of("Original Success");
// --- Handler Function ---
// Function<Throwable, Kind<CompletableFutureKind.Witness, String>>
Function<Throwable, Kind<CompletableFutureKind.Witness, String>> recoveryHandler =
error -> {
System.out.println("Handling error: " + error.getMessage());
if (error instanceof IOException) {
// Recover from specific checked exceptions
return futureMonad.of("Recovered from IO Error");
} else if (error instanceof IllegalStateException) {
// Recover from specific runtime exceptions
return FUTURE.widen(CompletableFuture.supplyAsync(()->{
System.out.println("Async recovery..."); // Recovery can be async too!
return "Recovered from State Error (async)";
}));
} else if (error instanceof ArithmeticException) {
// Recover from ArithmeticException
return futureMonad.of("Recovered from Arithmetic Error: " + error.getMessage());
}
else {
// Re-raise unhandled errors
System.out.println("Unhandled error type: " + error.getClass().getSimpleName());
return futureMonad.raiseError(new RuntimeException("Recovery failed", error));
}
};
// --- Applying Handler ---
// Handle RuntimeException
Kind<CompletableFutureKind.Witness, String> recoveredRuntime = futureMonad.handleErrorWith(
failedRuntimeKind,
recoveryHandler
);
System.out.println("Recovered (Runtime): " + FUTURE.join(recoveredRuntime));
// Output:
// Handling error: Processing Failed
// Async recovery...
// Recovered (Runtime): Recovered from State Error (async)
// Handle CheckedException
Kind<CompletableFutureKind.Witness, String> recoveredChecked = futureMonad.handleErrorWith(
failedCheckedKind,
recoveryHandler
);
System.out.println("Recovered (Checked): " + FUTURE.join(recoveredChecked));
// Output:
// Handling error: File Not Found
// Recovered (Checked): Recovered from IO Error
// Handler is ignored for success
Kind<CompletableFutureKind.Witness, String> handledSuccess = futureMonad.handleErrorWith(
successKind,
recoveryHandler // This handler is never called
);
System.out.println("Handled (Success): " + FUTURE.join(handledSuccess));
// Output: Handled (Success): Original Success
// Example of re-raising an error the handler does not recover from.
// (ArithmeticException is recovered above, so use a type the handler re-raises.)
UnsupportedOperationException unhandledEx = new UnsupportedOperationException("Not Supported");
Kind<CompletableFutureKind.Witness, String> failedUnhandledKind = futureMonad.raiseError(unhandledEx);
Kind<CompletableFutureKind.Witness, String> failedRecovery = futureMonad.handleErrorWith(
failedUnhandledKind,
recoveryHandler
);
try {
FUTURE.join(failedRecovery);
} catch (CompletionException e) { // join wraps the "Recovery failed" exception
System.err.println("Caught re-raised error: " + e.getCause());
System.err.println("  Original cause: " + e.getCause().getCause());
}
// Output:
// Handling error: Not Supported
// Unhandled error type: UnsupportedOperationException
}
- `handleErrorWith` allows you to inspect the `Throwable` and return a new `CompletableFutureKind`, potentially recovering the flow.
- The handler receives the cause of the failure (unwrapped from `CompletionException` if necessary).
The EitherMonad:
Typed Error Handling
- How to represent computations that can succeed (Right) or fail (Left) with specific error types
- Building type-safe error handling without exceptions
- Chaining operations with automatic Left propagation
- Using fold to handle both success and failure cases
- Integration with EitherT for combining with other effects
Purpose
The Either<L, R> type represents a value that can be one of two possible types, conventionally denoted as Left and Right. Its primary purpose in functional programming and this library is to provide an explicit, type-safe way to handle computations that can result in either a successful outcome or a specific kind of failure.
- `Right<L, R>`: By convention, represents the success case, holding a value of type `R`.
- `Left<L, R>`: By convention, represents the failure or alternative case, holding a value of type `L` (often an error type).
Unlike throwing exceptions, Either makes the possibility of failure explicit in the return type of a function. Unlike Optional or Maybe, which simply signal the absence of a value, Either allows carrying specific information about why a computation failed in the Left value.
We can think of Either as an extension of Maybe: Right is equivalent to Maybe.Just, and Left is equivalent to Maybe.Nothing, except that a Left can carry a value describing the failure.
The implementation in this library is a sealed interface Either<L, R> with two record implementations: Left<L, R> and Right<L, R>. Both Left and Right directly implement EitherKind<L, R> (and EitherKind2<L, R> for bifunctor operations), which extend Kind<EitherKind.Witness<L>, R>. This means widen/narrow operations have zero runtime overhead—no wrapper object allocation needed.
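To make the shape concrete, here is a minimal, self-contained sketch of the Either idea using hypothetical `MyEither`/`MyLeft`/`MyRight` types. The library's sealed interface is richer, but it follows the same pattern:

```java
import java.util.function.Function;

// Minimal sketch of the Either idea (illustrative only; not the library's type).
sealed interface MyEither<L, R> permits MyLeft, MyRight {
    static <L, R> MyEither<L, R> left(L l) { return new MyLeft<>(l); }
    static <L, R> MyEither<L, R> right(R r) { return new MyRight<>(r); }

    // fold handles both cases safely: one function per side, no exceptions.
    default <T> T fold(Function<? super L, ? extends T> onLeft,
                       Function<? super R, ? extends T> onRight) {
        return switch (this) {
            case MyLeft<L, R> l -> onLeft.apply(l.value());
            case MyRight<L, R> r -> onRight.apply(r.value());
        };
    }
}
record MyLeft<L, R>(L value) implements MyEither<L, R> {}
record MyRight<L, R>(R value) implements MyEither<L, R> {}

public class EitherSketch {
    public static void main(String[] args) {
        MyEither<String, Integer> ok = MyEither.right(123);
        MyEither<String, Integer> err = MyEither.left("File not found");
        System.out.println(ok.fold(l -> "Error: " + l, r -> "Success: " + r));
        System.out.println(err.fold(l -> "Error: " + l, r -> "Success: " + r));
    }
}
```

Because the interface is sealed, the switch in `fold` is exhaustive: the compiler guarantees both cases are handled, which is exactly the safety property that makes Either attractive over exceptions.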
Structure
Creating Instances
You create Either instances using the static factory methods:
// Success case
Either<String, Integer> success = Either.right(123);
// Failure case
Either<String, Integer> failure = Either.left("File not found");
// Null values are permitted in Left or Right by default in this implementation
Either<String, Integer> rightNull = Either.right(null);
Either<String, Integer> leftNull = Either.left(null);
Working with Either
Several methods are available to interact with Either values:
- `isLeft()`: Returns `true` if it's a `Left`, `false` otherwise.
- `isRight()`: Returns `true` if it's a `Right`, `false` otherwise.

if (success.isRight()) { System.out.println("It's Right!"); }
if (failure.isLeft()) { System.out.println("It's Left!"); }
- `getLeft()`: Returns the value if it's a `Left`, otherwise throws `NoSuchElementException`.
- `getRight()`: Returns the value if it's a `Right`, otherwise throws `NoSuchElementException`.

try {
    Integer value = success.getRight(); // Returns 123
    String error = failure.getLeft();   // Returns "File not found"
    // String errorFromSuccess = success.getLeft(); // Throws NoSuchElementException
} catch (NoSuchElementException e) {
    System.err.println("Attempted to get the wrong side: " + e.getMessage());
}
Note: Prefer fold or pattern matching over direct getLeft/getRight calls.
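Since `Either` is a sealed interface, Java 21's exhaustive `switch` pattern matching is a safe alternative to `getLeft`/`getRight`. A minimal standalone sketch, using a simplified hypothetical `Either` rather than the library's own type:

```java
// Sketch: exhaustive pattern matching over a sealed Either-like type.
// Simplified stand-in for the library's Either (Java 21 switch patterns).
public class EitherMatch {
    sealed interface Either<L, R> {}
    record Left<L, R>(L value) implements Either<L, R> {}
    record Right<L, R>(R value) implements Either<L, R> {}

    static String describe(Either<String, Integer> e) {
        // The compiler checks exhaustiveness: no default branch is needed,
        // and adding a new case to the sealed hierarchy breaks the build here.
        return switch (e) {
            case Left<String, Integer> l  -> "Error: " + l.value();
            case Right<String, Integer> r -> "Success: " + r.value();
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Right<>(123)));             // Success: 123
        System.out.println(describe(new Left<>("File not found"))); // Error: File not found
    }
}
```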
The `fold` method is the safest way to handle both cases by providing two functions: one for the `Left` case and one for the `Right` case. It returns the result of whichever function is applied.

String resultMessage = failure.fold(
    leftValue -> "Operation failed with: " + leftValue,     // Function for Left
    rightValue -> "Operation succeeded with: " + rightValue // Function for Right
);
// resultMessage will be "Operation failed with: File not found"

String successMessage = success.fold(
    leftValue -> "Error: " + leftValue,
    rightValue -> "Success: " + rightValue
);
// successMessage will be "Success: 123"
Applies a function only to the Right value, leaving a Left unchanged. This is known as being "right-biased".
Function<Integer, String> intToString = Object::toString;
Either<String, String> mappedSuccess = success.map(intToString); // Right(123) -> Right("123")
Either<String, String> mappedFailure = failure.map(intToString); // Left(...) -> Left(...) unchanged
System.out.println(mappedSuccess); // Output: Right(value=123)
System.out.println(mappedFailure); // Output: Left(value=File not found)
Applies a function that itself returns an `Either` to a `Right` value. If the initial `Either` is a `Left`, it is returned unchanged. If the function applied to the `Right` value returns a `Left`, that `Left` becomes the result. This allows sequencing operations where each step can fail. A `Left` simply ignores the mapped function and returns itself (`map(f)` on `Left(value)` yields `Left(value)`), preserving the value it holds, so once a `Left` is encountered, subsequent transformations via `map` or `flatMap` are short-circuited.
public void basicFlatMap(){
// Example: Parse string, then check if positive
Function<String, Either<String, Integer>> parse = s -> {
try { return Either.right(Integer.parseInt(s.trim())); }
catch (NumberFormatException e) { return Either.left("Invalid number"); }
};
Function<Integer, Either<String, Integer>> checkPositive = i ->
(i > 0) ? Either.right(i) : Either.left("Number not positive");
Either<String, String> input1 = Either.right(" 10 ");
Either<String, String> input2 = Either.right(" -5 ");
Either<String, String> input3 = Either.right(" abc ");
Either<String, String> input4 = Either.left("Initial error");
// Chain parse then checkPositive
Either<String, Integer> result1 = input1.flatMap(parse).flatMap(checkPositive); // Right(10)
Either<String, Integer> result2 = input2.flatMap(parse).flatMap(checkPositive); // Left("Number not positive")
Either<String, Integer> result3 = input3.flatMap(parse).flatMap(checkPositive); // Left("Invalid number")
Either<String, Integer> result4 = input4.flatMap(parse).flatMap(checkPositive); // Left("Initial error")
System.out.println(result1);
System.out.println(result2);
System.out.println(result3);
System.out.println(result4);
}
To use Either within the Higher-Kinded-J framework:

1. Identify Context: You are working with `Either<L, R>` where `L` is your chosen error type. The HKT witness will be `EitherKind.Witness<L>`.

2. Get Type Class Instance: Obtain an instance of `EitherMonad<L>` for your specific error type `L`. This instance implements `MonadError<EitherKind.Witness<L>, L>`.

// Assuming TestError is your error type
EitherMonad<TestError> eitherMonad = EitherMonad.instance();
// Now 'eitherMonad' can be used for operations on Kind<EitherKind.Witness<TestError>, A>

3. Wrap: Convert your `Either<L, R>` instances to `Kind<EitherKind.Witness<L>, R>` using `EITHER.widen()`, since `Either<L, R>` directly implements `EitherKind<L, R>`.

EitherMonad<String> eitherMonad = EitherMonad.instance();
Either<String, Integer> myEither = Either.right(10);
// F_WITNESS is EitherKind.Witness<String>, A is Integer
Kind<EitherKind.Witness<String>, Integer> eitherKind = EITHER.widen(myEither);

4. Apply Operations: Use the methods on the `eitherMonad` instance (`map`, `flatMap`, `ap`, `raiseError`, `handleErrorWith`, etc.).

// Using map via the Monad instance
Kind<EitherKind.Witness<String>, String> mappedKind = eitherMonad.map(Object::toString, eitherKind);
System.out.println("mappedKind: " + EITHER.narrow(mappedKind)); // Output: Right[value = 10]

// Using flatMap via the Monad instance
Function<Integer, Kind<EitherKind.Witness<String>, Double>> nextStep =
    i -> EITHER.widen((i > 5) ? Either.right(i / 2.0) : Either.left("TooSmall"));
Kind<EitherKind.Witness<String>, Double> flatMappedKind = eitherMonad.flatMap(nextStep, eitherKind);

// Creating a Left Kind using raiseError
Kind<EitherKind.Witness<String>, Integer> errorKind = eitherMonad.raiseError("E101"); // L is String here

// Handling an error
Kind<EitherKind.Witness<String>, Integer> handledKind =
    eitherMonad.handleErrorWith(errorKind, error -> {
        System.out.println("Handling error: " + error);
        return eitherMonad.of(0); // Recover with Right(0)
    });

5. Unwrap: Get the final `Either<L, R>` back using `EITHER.narrow()` when needed.

Either<String, Integer> finalEither = EITHER.narrow(handledKind);
System.out.println("Final unwrapped Either: " + finalEither); // Output: Right(0)
- Explicitly modelling and handling domain-specific errors (e.g., validation failures, resource not found, business rule violations).
- Sequencing operations where any step might fail with a typed error, short-circuiting the remaining steps.
- Serving as the inner type for monad transformers like `EitherT` to combine typed errors with other effects like asynchronicity (see the Order Example Walkthrough).
- Providing a more informative alternative to returning `null` or relying solely on exceptions for expected failure conditions.
Identity Monad (Id)
While it might seem trivial on its own, the Identity Monad plays a crucial role in a higher-kinded type library for several reasons:
- Base Case for Monad Transformers: Many monad transformers (like `StateT`, `ReaderT`, `MaybeT`, etc.) can be specialised to their simpler, non-transformed monad counterparts by using `Id` as the underlying monad. For example:
  - `StateT<S, IdKind.Witness, A>` is conceptually equivalent to `State<S, A>`.
  - `MaybeT<IdKind.Witness, A>` is conceptually equivalent to `Maybe<A>`.

  This allows for a unified way to define transformers and derive base monads.
- Generic Programming: When writing functions that are generic over any `Monad<F>`, `Id` can serve as the "no-effect" monad, allowing you to use these generic functions with pure values without introducing unnecessary complexity.
- Understanding Monads: It provides a clear example of the monadic structure (`of`, `flatMap`, `map`) without any distracting side effects or additional computational context.
What is Id?
An Id<A> is simply a container that holds a value of type A.
- `Id.of(value)` creates an `Id` instance holding `value`.
- `idInstance.value()` retrieves the value from the `Id` instance.
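Stripped of the HKT machinery, the whole idea fits in a few lines. A hypothetical, simplified sketch (not the library's `Id`):

```java
import java.util.function.Function;

// Sketch of an identity container: Id simply wraps a value, and map/flatMap
// apply functions with no extra effect or context.
public class IdSketch {
    record Id<A>(A value) {
        static <A> Id<A> of(A value) { return new Id<>(value); }
        <B> Id<B> map(Function<? super A, ? extends B> f) { return Id.of(f.apply(value)); }
        <B> Id<B> flatMap(Function<? super A, Id<B>> f) { return f.apply(value); }
    }

    public static void main(String[] args) {
        Id<Integer> id = Id.of(42);
        Id<String> mapped = id.map(i -> "Value is " + i);
        Id<String> flat = id.flatMap(i -> Id.of("FlatMapped: " + (i * 2)));
        System.out.println(mapped.value()); // Value is 42
        System.out.println(flat.value());   // FlatMapped: 84
    }
}
```

Because there is no effect at all, the monad laws are easy to verify by hand here, which is why `Id` is useful for understanding the structure.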
Key Classes and Concepts
- `Id<A>`: The data type itself. It's a record that wraps a value of type `A`. It implements `IdKind<A>`, which extends `Kind<IdKind.Witness, A>`.
- `IdKind<A>`: The Kind interface marker for the `Id` type. It extends `Kind<IdKind.Witness, A>`, following the standard Higher-Kinded-J pattern used by other types like `TrampolineKind` and `FreeKind`.
- `IdKind.Witness`: A static nested class within `IdKind` used as the phantom type marker (the `F` in `Kind<F, A>`) to represent the `Id` type constructor at the type level. This is part of the HKT emulation pattern.
- `IdKindHelper`: A utility class providing static helper methods:
  - `narrow(Kind<IdKind.Witness, A> kind)`: Safely casts a `Kind` back to a concrete `Id<A>`.
  - `widen(Id<A> id)`: Widens an `Id<A>` to `Kind<IdKind.Witness, A>` (often an identity cast, since `Id` implements `Kind`).
  - `narrows(Kind<IdKind.Witness, A> kind)`: A convenience method to narrow and then get the value.
- `IdMonad`: The singleton class that implements `Monad<IdKind.Witness>`, providing the monadic operations for `Id`.
Using Id and IdMonad
public void createExample(){
// Direct creation
Id<String> idString = Id.of("Hello, Identity!");
Id<Integer> idInt = Id.of(123);
Id<String> idNull = Id.of(null); // Id can wrap null
// Accessing the value
String value = idString.value(); // "Hello, Identity!"
Integer intValue = idInt.value(); // 123
String nullValue = idNull.value(); // null
}
The IdMonad provides the standard monadic operations.
public void monadExample(){
IdMonad idMonad = IdMonad.instance();
// 1. 'of' (lifting a value)
Kind<IdKind.Witness, Integer> kindInt = idMonad.of(42);
Id<Integer> idFromOf = ID.narrow(kindInt);
System.out.println("From of: " + idFromOf.value()); // Output: From of: 42
// 2. 'map' (applying a function to the wrapped value)
Kind<IdKind.Witness, String> kindStringMapped = idMonad.map(
i -> "Value is " + i,
kindInt
);
Id<String> idMapped = ID.narrow(kindStringMapped);
System.out.println("Mapped: " + idMapped.value()); // Output: Mapped: Value is 42
// 3. 'flatMap' (applying a function that returns an Id)
Kind<IdKind.Witness, String> kindStringFlatMapped = idMonad.flatMap(
i -> Id.of("FlatMapped: " + (i * 2)), // Function returns Id<String>
kindInt
);
Id<String> idFlatMapped = ID.narrow(kindStringFlatMapped);
System.out.println("FlatMapped: " + idFlatMapped.value()); // Output: FlatMapped: 84
// flatMap can also be called directly on Id if the function returns Id
Id<String> directFlatMap = idFromOf.flatMap(i -> Id.of("Direct FlatMap: " + i));
System.out.println(directFlatMap.value()); // Output: Direct FlatMap: 42
// 4. 'ap' (applicative apply)
Kind<IdKind.Witness, Function<Integer, String>> kindFunction = idMonad.of(i -> "Applied: " + i);
Kind<IdKind.Witness, String> kindApplied = idMonad.ap(kindFunction, kindInt);
Id<String> idApplied = ID.narrow(kindApplied);
System.out.println(idApplied.value()); // Output: Applied: 42
}
As mentioned in the StateT Monad Transformer documentation, State<S,A> can be thought of as StateT<S, IdKind.Witness, A>.
Let's illustrate how you might define a State monad type alias or use StateT with IdMonad:
public void transformerExample(){
// Conceptually, State<S, A> is StateT<S, IdKind.Witness, A>
// We can create a StateTMonad instance using IdMonad as the underlying monad.
StateTMonad<Integer, IdKind.Witness> stateMonadOverId =
StateTMonad.instance(IdMonad.instance());
// Example: A "State" computation that increments the state and returns the old state
Function<Integer, Kind<IdKind.Witness, StateTuple<Integer, Integer>>> runStateFn =
currentState -> Id.of(StateTuple.of(currentState + 1, currentState));
// Create the StateT (acting as State)
Kind<StateTKind.Witness<Integer, IdKind.Witness>, Integer> incrementAndGet =
StateTKindHelper.stateT(runStateFn, IdMonad.instance());
// Run it
Integer initialState = 10;
Kind<IdKind.Witness, StateTuple<Integer, Integer>> resultIdTuple =
StateTKindHelper.runStateT(incrementAndGet, initialState);
// Unwrap the Id and then the StateTuple
Id<StateTuple<Integer, Integer>> idTuple = ID.narrow(resultIdTuple);
StateTuple<Integer, Integer> tuple = idTuple.value();
System.out.println("Initial State: " + initialState); // Output: Initial State: 10
System.out.println("Returned Value (Old State): " + tuple.value()); // Output: Returned Value (Old State): 10
System.out.println("Final State: " + tuple.state()); // Output: Final State: 11
}
This example shows that StateT with Id behaves just like a standard State monad, where the "effect" of the underlying monad is simply identity (no additional effect).
The IOMonad:
Managing Side Effects with IO
- How to describe side effects without performing them immediately
- Building pure functional programs with deferred execution
- Composing complex side-effecting operations using `map` and `flatMap`
- The difference between describing effects and running them with `unsafeRunSync`
- Creating testable, composable programs that separate logic from execution
In functional programming, managing side effects (like printing to the console, reading files, making network calls, generating random numbers, or getting the current time) while maintaining purity is a common challenge.
The IO<A> monad in higher-kinded-j provides a way to encapsulate these side-effecting computations, making them first-class values that can be composed and manipulated functionally.
The key idea is that an IO<A> value doesn't perform the side effect immediately upon creation. Instead, it represents a description or recipe for a computation that, when executed, will perform the effect and potentially produce a value of type A. The actual execution is deferred until explicitly requested.
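The describe-now, run-later idea can be sketched with a plain `Supplier` wrapper. This hypothetical `IO` stand-in is illustrative only and omits the library's HKT integration:

```java
import java.util.function.Function;
import java.util.function.Supplier;

// Sketch of the core IO idea: an IO value is just a recipe (a Supplier)
// that does nothing until unsafeRunSync() is called.
public class IoSketch {
    record IO<A>(Supplier<A> thunk) {
        static <A> IO<A> delay(Supplier<A> thunk) { return new IO<>(thunk); }
        // map builds a NEW recipe; nothing runs here.
        <B> IO<B> map(Function<? super A, ? extends B> f) {
            return IO.delay(() -> f.apply(thunk.get()));
        }
        // flatMap sequences two recipes: run this one, feed its result to f, run that one.
        <B> IO<B> flatMap(Function<? super A, IO<B>> f) {
            return IO.delay(() -> f.apply(thunk.get()).thunk().get());
        }
        A unsafeRunSync() { return thunk.get(); }
    }

    static int sideEffects = 0;

    public static void main(String[] args) {
        IO<Integer> readCounter = IO.delay(() -> { sideEffects++; return sideEffects; });
        IO<String> program = readCounter.map(n -> "run #" + n);
        // Nothing has executed yet -- program is only a description:
        System.out.println(sideEffects);             // 0
        System.out.println(program.unsafeRunSync()); // run #1
        System.out.println(program.unsafeRunSync()); // run #2 -- effects re-run each time
    }
}
```

Note the last two lines: running the same `IO` value twice performs the effect twice, which is exactly the behaviour the library documents for `unsafeRunSync`.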
Core Components
The IO Type
The HKT Bridge for IO
Typeclasses for IO
The IO functionality is built upon several related components:
- `IO<A>`: The core functional interface. An `IO<A>` instance essentially wraps a `Supplier<A>` (or similar function) that performs the side effect and returns a value `A`. The crucial method is `unsafeRunSync()`, which executes the encapsulated computation. `IO<A>` directly extends `IOKind<A>`, making it a first-class participant in the HKT simulation.
- `IOKind<A>`: The HKT marker interface (`Kind<IOKind.Witness, A>`) for `IO`. This allows `IO` to be treated as a generic type constructor `F` in type classes like `Functor`, `Applicative`, and `Monad`. The witness type is `IOKind.Witness`. Since `IO<A>` directly extends this interface, no wrapper types are needed.
- `IOKindHelper`: The essential utility class for working with `IO` in the HKT simulation. It provides:
  - `widen(IO<A>)`: Converts a concrete `IO<A>` instance into its HKT representation `Kind<IOKind.Witness, A>`. Since `IO` directly implements `IOKind`, this is a null-checked cast with zero runtime overhead.
  - `narrow(Kind<IOKind.Witness, A>)`: Converts back to the concrete `IO<A>`. Performs an `instanceof IO` check and cast; throws `KindUnwrapException` if the input `Kind` is invalid.
  - `delay(Supplier<A>)`: The primary factory method to create an `IOKind<A>` by wrapping a side-effecting computation described by a `Supplier`.
  - `unsafeRunSync(Kind<IOKind.Witness, A>)`: The method to execute the computation described by an `IOKind`. This is typically called at the "end of the world" in your application (e.g., in the `main` method) to run the composed IO program.
- `IOFunctor`: Implements `Functor<IOKind.Witness>`. Provides the `map` operation to transform the result value `A` of an `IO` computation without executing the effect.
- `IOApplicative`: Extends `IOFunctor` and implements `Applicative<IOKind.Witness>`. Provides `of` (to lift a pure value into `IO` without side effects) and `ap` (to apply a function within `IO` to a value within `IO`).
- `IOMonad`: Extends `IOApplicative` and implements `Monad<IOKind.Witness>`. Provides `flatMap` to sequence `IO` computations, ensuring effects happen in the intended order.
Purpose and Usage
- Encapsulating Side Effects: Describe effects (like printing, reading files, making network calls) as `IO` values without executing them immediately.
- Maintaining Purity: Functions that create or combine `IO` values remain pure. They don't perform the effects themselves; they just build up a description of the effects to be performed later.
- Composition: Use `map` and `flatMap` (via `IOMonad`) to build complex sequences of side-effecting operations from smaller, reusable `IO` actions.
- Deferred Execution: Effects are only performed when `unsafeRunSync` is called on the final, composed `IO` value. This separates the description of the program from its execution.
Important Note: IO in this library primarily deals with deferring execution. It does not automatically provide sophisticated error handling like Either or Try, nor does it manage asynchronicity like CompletableFuture. Exceptions thrown during unsafeRunSync will typically propagate unless explicitly handled within the Supplier provided to IOKindHelper.delay. For combining IO with typed error handling, consider using EitherT<IOKind.Witness, E, A> (monad transformer) or wrapping IO operations with Try for exception handling.
Use IOKindHelper.delay to capture side effects. Use IOMonad.of for pure values within IO.
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.io.*;
import org.higherkindedj.hkt.Unit;
import java.util.function.Supplier;
import java.util.Scanner;
// Get the IOMonad instance
IOMonad ioMonad = IOMonad.INSTANCE;
// IO action to print a message
Kind<IOKind.Witness, Unit> printHello = IOKindHelper.delay(() -> {
System.out.println("Hello from IO!");
return Unit.INSTANCE;
});
// IO action to read a line from the console
Kind<IOKind.Witness, String> readLine = IOKindHelper.delay(() -> {
System.out.print("Enter your name: ");
// Scanner should ideally be managed more robustly in real apps
try (Scanner scanner = new Scanner(System.in)) {
return scanner.nextLine();
}
});
// IO action that returns a pure value (no side effect description here)
Kind<IOKind.Witness, Integer> pureValueIO = ioMonad.of(42);
// IO action that simulates getting the current time (a side effect)
Kind<IOKind.Witness, Long> currentTime = IOKindHelper.delay(System::currentTimeMillis);
// Creating an IO action that might fail internally
Kind<IOKind.Witness, String> potentiallyFailingIO = IOKindHelper.delay(() -> {
if (Math.random() < 0.5) {
throw new RuntimeException("Simulated failure!");
}
return "Success!";
});
Nothing happens when you create these IOKind values. The Supplier inside delay is not executed.
Use IOKindHelper.unsafeRunSync to run the computation.
// (Continuing from above examples)
// Execute printHello
System.out.println("Running printHello:");
IOKindHelper.unsafeRunSync(printHello); // Actually prints "Hello from IO!"
// Execute readLine (will block for user input)
// System.out.println("\nRunning readLine:");
// String name = IOKindHelper.unsafeRunSync(readLine);
// System.out.println("User entered: " + name);
// Execute pureValueIO
System.out.println("\nRunning pureValueIO:");
Integer fetchedValue = IOKindHelper.unsafeRunSync(pureValueIO);
System.out.println("Fetched pure value: " + fetchedValue); // Output: 42
// Execute potentiallyFailingIO
System.out.println("\nRunning potentiallyFailingIO:");
try {
String result = IOKindHelper.unsafeRunSync(potentiallyFailingIO);
System.out.println("Succeeded: " + result);
} catch (RuntimeException e) {
System.err.println("Caught expected failure: " + e.getMessage());
}
// Notice that running the same IO action again executes the effect again
System.out.println("\nRunning printHello again:");
IOKindHelper.unsafeRunSync(printHello); // Prints "Hello from IO!" again
Use IOMonad instance methods.
import org.higherkindedj.hkt.io.IOMonad;
import org.higherkindedj.hkt.Unit;
import java.util.function.Function;
IOMonad ioMonad = IOMonad.INSTANCE;
// --- map example ---
Kind<IOKind.Witness, String> readLineAction = IOKindHelper.delay(() -> "Test Input"); // Simulate input
// Map the result of readLineAction without executing readLine yet
Kind<IOKind.Witness, String> greetAction = ioMonad.map(
name -> "Hello, " + name + "!", // Function to apply to the result
readLineAction
);
System.out.println("Greet action created, not executed yet.");
// Now execute the mapped action
String greeting = IOKindHelper.unsafeRunSync(greetAction);
System.out.println("Result of map: " + greeting); // Output: Hello, Test Input!
// --- flatMap example ---
// Action 1: Get name
Kind<IOKind.Witness, String> getName = IOKindHelper.delay(() -> {
System.out.println("Effect: Getting name...");
return "Alice";
});
// Action 2 (depends on name): Print greeting
Function<String, Kind<IOKind.Witness, Unit>> printGreeting = name ->
IOKindHelper.delay(() -> {
System.out.println("Effect: Printing greeting for " + name);
System.out.println("Welcome, " + name + "!");
return Unit.INSTANCE;
});
// Combine using flatMap
Kind<IOKind.Witness, Unit> combinedAction = ioMonad.flatMap(printGreeting, getName);
System.out.println("\nCombined action created, not executed yet.");
// Execute the combined action
IOKindHelper.unsafeRunSync(combinedAction);
// Output:
// Effect: Getting name...
// Effect: Printing greeting for Alice
// Welcome, Alice!
// --- Full Program Example ---
Kind<IOKind.Witness, Unit> program = ioMonad.flatMap(
ignored -> ioMonad.flatMap( // Chain after printing hello
name -> ioMonad.map( // Map the result of printing the greeting
ignored2 -> { System.out.println("Program finished");
return Unit.INSTANCE; },
printGreeting.apply(name) // Action 3: Print greeting based on name
),
readLine // Action 2: Read line
),
printHello // Action 1: Print Hello
);
System.out.println("\nComplete IO Program defined. Executing...");
// IOKindHelper.unsafeRunSync(program); // Uncomment to run the full program
Notes:
- `map` transforms the result of an `IO` action without changing the effect itself (though the transformation happens after the effect runs).
- `flatMap` sequences `IO` actions, ensuring the effect of the first action completes before the second action (which might depend on the first action's result) begins.
The Lazy Monad:
Lazy Evaluation with Lazy
- How to defer expensive computations until their results are actually needed
- Understanding memoisation and how results are cached after first evaluation
- Handling exceptions in lazy computations with ThrowableSupplier
- Composing lazy operations while preserving laziness
- Building efficient pipelines that avoid unnecessary work
This article introduces the Lazy<A> type and its associated components within the higher-kinded-j library. Lazy provides a mechanism for deferred computation, where a value is calculated only when needed and the result (or any exception thrown during calculation) is memoised (cached).
Core Components
The Lazy Type
The HKT Bridge for Lazy
Typeclasses for Lazy
The lazy evaluation feature revolves around these key types:
- `ThrowableSupplier<T>`: A functional interface similar to `java.util.function.Supplier`, but its `get()` method is allowed to throw any `Throwable` (including checked exceptions). This is used as the underlying computation for `Lazy`.
- `Lazy<A>`: The core class representing a computation that produces a value of type `A` lazily. It takes a `ThrowableSupplier<? extends A>` during construction (`Lazy.defer`). Evaluation is triggered only by the `force()` method, and the result or exception is cached. `Lazy.now(value)` creates an already evaluated instance.
- `LazyKind<A>`: The HKT marker interface (`Kind<LazyKind.Witness, A>`) for `Lazy`, allowing it to be used generically with type classes like `Functor` and `Monad`.
- `LazyKindHelper`: A utility class providing static methods to bridge between the concrete `Lazy<A>` type and its HKT representation `LazyKind<A>`. It includes:
  - `widen(Lazy<A>)`: Wraps a `Lazy` instance into `LazyKind`.
  - `narrow(Kind<LazyKind.Witness, A>)`: Unwraps `LazyKind` back to `Lazy`; throws `KindUnwrapException` if the input `Kind` is invalid.
  - `defer(ThrowableSupplier<A>)`: Factory to create a `LazyKind` from a computation.
  - `now(A value)`: Factory to create an already evaluated `LazyKind`.
  - `force(Kind<LazyKind.Witness, A>)`: Convenience method to unwrap and force evaluation.
- `LazyMonad`: The type class instance implementing `Monad<LazyKind.Witness>`, `Applicative<LazyKind.Witness>`, and `Functor<LazyKind.Witness>`. It provides the standard monadic operations (`map`, `flatMap`, `of`, `ap`) for `LazyKind`, ensuring laziness is maintained during composition.
Purpose and Usage
- Deferred Computation: Use `Lazy` when you have potentially expensive computations that should only execute if their result is actually needed.
- Memoisation: The result (or exception) of the computation is stored after the first call to `force()`; subsequent calls return the cached result without re-computation.
- Exception Handling: Computations wrapped in `Lazy.defer` can throw any `Throwable`. The exception is caught, memoised, and re-thrown by `force()`.
- Functional Composition: `LazyMonad` allows chaining lazy computations using `map` and `flatMap` while preserving laziness. The composition itself doesn't trigger evaluation; only forcing the final `LazyKind` does.
- HKT Integration: `LazyKind` and `LazyMonad` enable using lazy computations within generic functional code expecting `Kind<F, A>` and `Monad<F>`.
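The defer-then-memoise behaviour can be sketched as follows. This is a simplified, hypothetical stand-in (not thread-safe, and without the `ThrowableSupplier` support the real `Lazy` has):

```java
import java.util.function.Supplier;

// Sketch of deferred evaluation with memoisation: the supplier runs at most
// once; every later force() returns the cached result.
public class LazySketch {
    static class Lazy<A> {
        private Supplier<A> thunk;
        private A value;
        private boolean evaluated = false;

        private Lazy(Supplier<A> thunk) { this.thunk = thunk; }
        static <A> Lazy<A> defer(Supplier<A> thunk) { return new Lazy<>(thunk); }

        A force() {
            if (!evaluated) {
                value = thunk.get();
                evaluated = true;
                thunk = null; // drop the computation so it can be garbage-collected
            }
            return value;
        }
    }

    public static void main(String[] args) {
        int[] runs = {0};
        Lazy<String> lazy = Lazy.defer(() -> { runs[0]++; return "Computed"; });
        System.out.println(runs[0]);      // 0 -- nothing evaluated yet
        System.out.println(lazy.force()); // Computed
        System.out.println(lazy.force()); // Computed (cached)
        System.out.println(runs[0]);      // 1 -- supplier ran only once
    }
}
```

Contrast this with the `IO` sketch earlier in the chapter's spirit: `IO` re-runs its effect on every execution, while `Lazy` caches the first result.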
// 1. Deferring a computation (that might throw checked exception)
java.util.concurrent.atomic.AtomicInteger counter = new java.util.concurrent.atomic.AtomicInteger(0);
Kind<LazyKind.Witness, String> deferredLazy = LAZY.defer(() -> {
System.out.println("Executing expensive computation...");
counter.incrementAndGet();
// Simulate potential failure
if (System.currentTimeMillis() % 2 == 0) {
// Throwing a checked exception is allowed by ThrowableSupplier
throw new java.io.IOException("Simulated IO failure");
}
Thread.sleep(50); // Simulate work
return "Computed Value";
});
// 2. Creating an already evaluated Lazy
Kind<LazyKind.Witness, String> nowLazy = LAZY.now("Precomputed Value");
// 3. Using the underlying Lazy type directly (less common when using HKT)
Lazy<String> directLazy = Lazy.defer(() -> { counter.incrementAndGet(); return "Direct Lazy"; });
Evaluation only happens when force() is called (directly or via the helper).
// (Continuing from above)
System.out.println("Lazy instances created. Counter: " + counter.get()); // Output: 0
try {
// Force the deferred computation
String result1 = LAZY.force(deferredLazy); // force() throws Throwable
System.out.println("Result 1: " + result1);
System.out.println("Counter after first force: " + counter.get()); // Output: 1
// Force again - uses memoised result
String result2 = LAZY.force(deferredLazy);
System.out.println("Result 2: " + result2);
System.out.println("Counter after second force: " + counter.get()); // Output: 1 (not re-computed)
// Force the 'now' instance
String resultNow = LAZY.force(nowLazy);
System.out.println("Result Now: " + resultNow);
System.out.println("Counter after forcing 'now': " + counter.get()); // Output: 1 (no computation ran for 'now')
} catch (Throwable t) { // Catch Throwable because force() can re-throw anything
System.err.println("Caught exception during force: " + t);
// Exception is also memoised:
try {
LAZY.force(deferredLazy);
} catch (Throwable t2) {
System.err.println("Caught memoised exception: " + t2);
System.out.println("Counter after failed force: " + counter.get()); // Output: 1
}
}
LazyMonad lazyMonad = LazyMonad.INSTANCE;
counter.set(0); // Reset counter for this example
Kind<LazyKind.Witness, Integer> initialLazy = LAZY.defer(() -> { counter.incrementAndGet(); return 10; });
// --- map ---
// Apply a function lazily
Function<Integer, String> toStringMapper = i -> "Value: " + i;
Kind<LazyKind.Witness, String> mappedLazy = lazyMonad.map(toStringMapper, initialLazy);
System.out.println("Mapped Lazy created. Counter: " + counter.get()); // Output: 0
try {
System.out.println("Mapped Result: " + LAZY.force(mappedLazy)); // Triggers evaluation of initialLazy & map
// Output: Mapped Result: Value: 10
System.out.println("Counter after forcing mapped: " + counter.get()); // Output: 1
} catch (Throwable t) { /* ... */ }
// --- flatMap ---
// Sequence lazy computations
Function<Integer, Kind<LazyKind.Witness, String>> multiplyAndStringifyLazy =
i -> LAZY.defer(() -> { // Inner computation is also lazy
int result = i * 5;
return "Multiplied: " + result;
});
Kind<LazyKind.Witness, String> flatMappedLazy = lazyMonad.flatMap(multiplyAndStringifyLazy, initialLazy);
System.out.println("FlatMapped Lazy created. Counter: " + counter.get()); // Output: 1 (map already forced initialLazy)
try {
System.out.println("FlatMapped Result: " + LAZY.force(flatMappedLazy)); // Triggers evaluation of inner lazy
// Output: FlatMapped Result: Multiplied: 50
} catch (Throwable t) { /* ... */ }
// --- Chaining ---
Kind<LazyKind.Witness, String> chainedLazy = lazyMonad.flatMap(
value1 -> lazyMonad.map(
value2 -> "Combined: " + value1 + " & " + value2, // Combine results
LAZY.defer(()->value1 * 2) // Second lazy step, depends on result of first
),
LAZY.defer(()->5) // First lazy step
);
try {
    System.out.println("Chained Result: " + LAZY.force(chainedLazy)); // Output: Chained Result: Combined: 5 & 10
} catch (Throwable t) { /* ... */ }
The ListMonad:
Monadic Operations on Java Lists
- How to work with Lists as contexts representing multiple possible values
- Using `flatMap` for non-deterministic computations and combinations
- Generating Cartesian products and filtering results
- Understanding how List models choice and branching computations
- Building search algorithms and combinatorial problems with monadic operations
Purpose
The ListMonad in the Higher-Kinded-J library provides a monadic interface for Java's standard java.util.List. It allows developers to work with lists in a more functional style, enabling operations like map, flatMap, and ap (apply) within the higher-kinded type system. This is particularly useful for sequencing operations that produce lists, transforming list elements, and applying functions within a list context, all while integrating with the generic Kind<F, A> abstractions.
Key benefits include:
- Functional Composition: Easily chain operations on lists, where each operation might return a list itself.
- HKT Integration: `ListKind` (the higher-kinded wrapper for `List`) and `ListMonad` allow `List` to be used with generic functions and type classes expecting `Kind<F, A>`, `Functor<F>`, `Applicative<F>`, or `Monad<F>`.
- Standard List Behaviour: Leverages the familiar behaviour of Java lists, such as non-uniqueness of elements and order preservation. `flatMap` corresponds to applying a function that returns a list to each element and then concatenating the results.
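This apply-then-concatenate behaviour is the same as `java.util.stream.Stream.flatMap`, so it can be demonstrated with plain JDK APIs:

```java
import java.util.List;

// Each element is mapped to a small list, and the resulting lists are
// concatenated in order -- the List-monad flatMap behaviour, via Stream.
public class ListFlatMapDemo {
    public static void main(String[] args) {
        List<Integer> input = List.of(1, 2, 3);
        List<Integer> result = input.stream()
            .flatMap(i -> List.of(i, i + 10).stream()) // element -> list of results
            .toList();                                  // ...flattened into one list
        System.out.println(result); // [1, 11, 2, 12, 3, 13]
    }
}
```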
`ListMonad` implements `Monad<ListKind.Witness>`, inheriting from `Functor<ListKind.Witness>` and `Applicative<ListKind.Witness>`.
Structure
How to Use ListMonad and ListKind
Creating Instances
ListKind<A> is the higher-kinded type representation for java.util.List<A>. You typically create ListKind instances using the ListKindHelper utility class or the of method from ListMonad.
LIST.widen(List)
Converts a standard java.util.List<A> into a Kind<ListKind.Witness, A>.
List<String> stringList = Arrays.asList("a", "b", "c");
Kind<ListKind.Witness, String> listKind1 = LIST.widen(stringList);
List<Integer> intList = Collections.singletonList(10);
Kind<ListKind.Witness, Integer> listKind2 = LIST.widen(intList);
List<Object> emptyList = Collections.emptyList();
Kind<ListKind.Witness, Object> listKindEmpty = LIST.widen(emptyList);
Lifts a single value into the ListKind context, creating a singleton list. A null input value results in an empty ListKind.
ListMonad listMonad = ListMonad.INSTANCE;
Kind<ListKind.Witness, String> listKindOneItem = listMonad.of("hello"); // Contains a list with one element: "hello"
Kind<ListKind.Witness, Integer> listKindAnotherItem = listMonad.of(42); // Contains a list with one element: 42
Kind<ListKind.Witness, Object> listKindFromNull = listMonad.of(null); // Contains an empty list
To get the underlying java.util.List<A> from a Kind<ListKind.Witness, A>, use LIST.narrow():
Kind<ListKind.Witness, String> listKind = LIST.widen(List.of("example"));
List<String> unwrappedList = LIST.narrow(listKind); // Returns List.of("example")
System.out.println(unwrappedList);
Key Operations
The ListMonad provides standard monadic operations:
map(Function<A, B> f, Kind<ListKind.Witness, A> fa):
Applies a function f to each element of the list within fa, returning a new ListKind containing the transformed elements.
ListMonad listMonad = ListMonad.INSTANCE;
Kind<ListKind.Witness, Integer> numbers = LIST.widen(Arrays.asList(1, 2, 3));
Function<Integer, String> intToString = i -> "Number: " + i;
Kind<ListKind.Witness, String> strings = listMonad.map(intToString, numbers);
// LIST.narrow(strings) would be: ["Number: 1", "Number: 2", "Number: 3"]
System.out.println(LIST.narrow(strings));
flatMap(Function<A, Kind<ListKind.Witness, B>> f, Kind<ListKind.Witness, A> ma):
Applies a function f to each element of the list within ma. The function f itself returns a ListKind<B>. flatMap then concatenates (flattens) all these resulting lists into a single ListKind<B>.
ListMonad listMonad = ListMonad.INSTANCE;
Kind<ListKind.Witness, Integer> initialValues = LIST.widen(Arrays.asList(1, 2, 3));
// Function that takes an integer and returns a list of itself and itself + 10
Function<Integer, Kind<ListKind.Witness, Integer>> replicateAndAddTen =
i -> LIST.widen(Arrays.asList(i, i + 10));
Kind<ListKind.Witness, Integer> flattenedList = listMonad.flatMap(replicateAndAddTen, initialValues);
// LIST.narrow(flattenedList) would be: [1, 11, 2, 12, 3, 13]
System.out.println(LIST.narrow(flattenedList));
// Example with empty list results
Function<Integer, Kind<ListKind.Witness, String>> toWordsIfEven =
i -> (i % 2 == 0) ?
LIST.widen(Arrays.asList("even", String.valueOf(i))) :
LIST.widen(new ArrayList<>()); // empty list for odd numbers
Kind<ListKind.Witness, String> wordsList = listMonad.flatMap(toWordsIfEven, initialValues);
// LIST.narrow(wordsList) would be: ["even", "2"]
System.out.println(LIST.narrow(wordsList));
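For intuition, the same flattening can be expressed with plain `java.util.stream` — this is a sketch of the semantics only, not the library API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

List<Integer> initial = Arrays.asList(1, 2, 3);
// The same function as above, expressed over plain lists
Function<Integer, List<Integer>> replicateAndAddTen = i -> Arrays.asList(i, i + 10);
// flatMap = apply the function to each element, then splice the results in order
List<Integer> flattened = initial.stream()
    .flatMap(i -> replicateAndAddTen.apply(i).stream())
    .collect(Collectors.toList());
// flattened: [1, 11, 2, 12, 3, 13] — matching the ListMonad result above
```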
ap(Kind<ListKind.Witness, Function<A, B>> ff, Kind<ListKind.Witness, A> fa):
Applies a list of functions ff to a list of values fa. This results in a new list where each function from ff is applied to each value in fa (Cartesian product style).
ListMonad listMonad = ListMonad.INSTANCE;
Function<Integer, String> addPrefix = i -> "Val: " + i;
Function<Integer, String> multiplyAndString = i -> "Mul: " + (i * 2);
Kind<ListKind.Witness, Function<Integer, String>> functions =
LIST.widen(Arrays.asList(addPrefix, multiplyAndString));
Kind<ListKind.Witness, Integer> values = LIST.widen(Arrays.asList(10, 20));
Kind<ListKind.Witness, String> appliedResults = listMonad.ap(functions, values);
// LIST.narrow(appliedResults) would be:
// ["Val: 10", "Val: 20", "Mul: 20", "Mul: 40"]
System.out.println(LIST.narrow(appliedResults));
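Semantically, `ap` over lists amounts to two nested loops, with the functions in the outer loop. A plain-Java sketch (no library types involved):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

List<Function<Integer, String>> fs =
    Arrays.asList(i -> "Val: " + i, i -> "Mul: " + (i * 2));
List<Integer> vals = Arrays.asList(10, 20);

List<String> results = new ArrayList<>();
for (Function<Integer, String> f : fs) {   // outer loop: each function
  for (Integer v : vals) {                 // inner loop: each value
    results.add(f.apply(v));
  }
}
// results: [Val: 10, Val: 20, Mul: 20, Mul: 40] — the same Cartesian order as ListMonad.ap
```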
To use ListMonad in generic contexts that operate over Kind<F, A>:
- Get an instance of `ListMonad`:

ListMonad listMonad = ListMonad.INSTANCE;

- Wrap your `List` into `Kind`:

List<Integer> myList = Arrays.asList(10, 20, 30);
Kind<ListKind.Witness, Integer> listKind = LIST.widen(myList);

- Use `ListMonad` methods:
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.list.ListKind;
import org.higherkindedj.hkt.list.ListMonad;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import static org.higherkindedj.hkt.list.ListKindHelper.LIST;
public class ListMonadExample {
public static void main(String[] args) {
ListMonad listMonad = ListMonad.INSTANCE;
// 1. Create a ListKind
Kind<ListKind.Witness, Integer> numbersKind = LIST.widen(Arrays.asList(1, 2, 3, 4));
// 2. Use map
Function<Integer, String> numberToDecoratedString = n -> "*" + n + "*";
Kind<ListKind.Witness, String> stringsKind = listMonad.map(numberToDecoratedString, numbersKind);
System.out.println("Mapped: " + LIST.narrow(stringsKind));
// Expected: Mapped: [*1*, *2*, *3*, *4*]
// 3. Use flatMap
// Function: integer -> ListKind of [integer, integer*10] if even, else empty ListKind
Function<Integer, Kind<ListKind.Witness, Integer>> duplicateIfEven = n -> {
if (n % 2 == 0) {
return LIST.widen(Arrays.asList(n, n * 10));
} else {
return LIST.widen(List.of()); // Empty list
}
};
Kind<ListKind.Witness, Integer> flatMappedKind = listMonad.flatMap(duplicateIfEven, numbersKind);
System.out.println("FlatMapped: " + LIST.narrow(flatMappedKind));
// Expected: FlatMapped: [2, 20, 4, 40]
// 4. Use of
Kind<ListKind.Witness, String> singleValueKind = listMonad.of("hello world");
System.out.println("From 'of': " + LIST.narrow(singleValueKind));
// Expected: From 'of': [hello world]
Kind<ListKind.Witness, String> fromNullOf = listMonad.of(null);
System.out.println("From 'of' with null: " + LIST.narrow(fromNullOf));
// Expected: From 'of' with null: []
// 5. Use ap
Kind<ListKind.Witness, Function<Integer, String>> listOfFunctions =
LIST.widen(Arrays.asList(
i -> "F1:" + i,
i -> "F2:" + (i * i)
));
Kind<ListKind.Witness, Integer> inputNumbersForAp = LIST.widen(Arrays.asList(5, 6));
Kind<ListKind.Witness, String> apResult = listMonad.ap(listOfFunctions, inputNumbersForAp);
System.out.println("Ap result: " + LIST.narrow(apResult));
// Expected: Ap result: [F1:5, F1:6, F2:25, F2:36]
// Unwrap to get back the standard List
List<Integer> finalFlatMappedList = LIST.narrow(flatMappedKind);
System.out.println("Final unwrapped flatMapped list: " + finalFlatMappedList);
}
}
This example demonstrates how to wrap Java Lists into ListKind, apply monadic operations using ListMonad, and then unwrap them back to standard Lists.
The MaybeMonad:
Handling Optional Values with Non-Null Guarantee
- How Maybe provides null-safe optional values with guaranteed non-null contents
- The difference between Maybe and Optional (non-null guarantee in Just)
- Using Maybe as a MonadError with Unit as the error type
- Chaining operations with automatic Nothing propagation
- Building robust pipelines that handle absence gracefully
Purpose
How do you handle optional values in Java without falling into the null pointer trap? The Maybe<T> type in Higher-Kinded-J provides an elegant solution—representing a value that might be present (Just<T>) or absent (Nothing<T>), with one crucial guarantee: a Just<T> will never hold null.
The Maybe<T> type is conceptually similar to java.util.Optional<T> but with a key distinction: a Just<T> is guaranteed to hold a non-null value. This strictness helps prevent NullPointerExceptions when a value is asserted to be present. Maybe.fromNullable(T value) or MaybeMonad.of(T value) should be used if the input value could be null, as these will correctly produce a Nothing in such cases.
The MaybeMonad provides a monadic interface for Maybe, allowing for functional composition and integration with the Higher-Kinded Type (HKT) system. This facilitates chaining operations that may or may not yield a value, propagating the Nothing state automatically.
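The null-handling split mirrors behaviour already familiar from java.util.Optional itself, which can be shown in plain Java: Optional.of rejects null just as Maybe.just does, while Optional.ofNullable plays the role of Maybe.fromNullable:

```java
import java.util.Optional;

// Optional.of, like Maybe.just, rejects null outright
boolean threw = false;
try {
  Optional.of(null);
} catch (NullPointerException e) {
  threw = true; // same behaviour as Maybe.just(null)
}

// Optional.ofNullable, like Maybe.fromNullable, turns null into the empty case
Optional<String> empty = Optional.ofNullable(null);     // empty, like Nothing
Optional<String> present = Optional.ofNullable("value"); // present, like Just("value")
```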
- Explicit Optionality with Non-Null Safety: `Just<T>` guarantees its contained value is not null. `Nothing<T>` clearly indicates absence.
- Functional Composition: Enables elegant chaining of operations using `map`, `flatMap`, and `ap`, where `Nothing` short-circuits computations.
- HKT Integration: `MaybeKind<A>` (the HKT wrapper for `Maybe<A>`) and `MaybeMonad` allow `Maybe` to be used with generic functions and type classes that expect `Kind<F, A>`, `Functor<F>`, `Applicative<F>`, `Monad<M>`, or `MonadError<M, E>`.
- Error Handling for Absence: `MaybeMonad` implements `MonadError<MaybeKind.Witness, Unit>`. `Nothing` is treated as the "error" state, with `Unit` as the phantom error type, signifying absence.
It implements MonadError<MaybeKind.Witness, Unit>, which transitively includes Monad<MaybeKind.Witness>, Applicative<MaybeKind.Witness>, and Functor<MaybeKind.Witness>.
Structure
How to Use MaybeMonad and Maybe
Creating Instances
Maybe<A> instances can be created directly using static factory methods on Maybe, or via MaybeMonad for HKT integration. Since Just<T> and Nothing<T> directly implement MaybeKind<T>, they are first-class participants in the HKT simulation with zero runtime overhead for widen/narrow operations.
Direct Maybe Creation:
Maybe.just(T value)
Creates a Just holding a non-null value. Throws NullPointerException if value is null.
Maybe<String> justHello = Maybe.just("Hello"); // Just("Hello")
Maybe<String> illegalJust = Maybe.just(null); // Throws NullPointerException
Maybe.nothing()
Returns a singleton Nothing instance.
Maybe<Integer> noInt = Maybe.nothing(); // Nothing
Maybe.fromNullable(T value)
Creates Just(value) if value is non-null, otherwise Nothing.
Maybe<String> fromPresent = Maybe.fromNullable("Present"); // Just("Present")
Maybe<String> fromNull = Maybe.fromNullable(null); // Nothing
MaybeKindHelper (for HKT wrapping):
MaybeKindHelper.widen(Maybe maybe)
Converts a Maybe<A> to Kind<MaybeKind.Witness, A>. Since Just and Nothing directly implement MaybeKind, this performs a null check and type-safe cast (zero overhead—no wrapper object allocation).
Kind<MaybeKind.Witness, String> kindJust = MAYBE.widen(Maybe.just("Wrapped"));
Kind<MaybeKind.Witness, Integer> kindNothing = MAYBE.widen(Maybe.nothing());
MaybeMonad Instance Methods:
maybeMonad.of(A value)
Lifts a value into Kind<MaybeKind.Witness, A>. Uses Maybe.fromNullable() internally.
MaybeMonad maybeMonad = MaybeMonad.INSTANCE;
Kind<MaybeKind.Witness, String> kindFromMonad = maybeMonad.of("Monadic"); // Just("Monadic")
Kind<MaybeKind.Witness, String> kindNullFromMonad = maybeMonad.of(null); // Nothing
maybeMonad.raiseError(Unit error)
Creates a Kind<MaybeKind.Witness, A> representing Nothing. The error (Unit) argument is ignored.
Kind<MaybeKind.Witness, Double> errorKind = maybeMonad.raiseError(Unit.INSTANCE); // Nothing
To get the underlying Maybe<A> from a MaybeKind<A>, use MAYBE.narrow():
MaybeKind<String> kindJust = MAYBE.just("Example");
Maybe<String> unwrappedMaybe = MAYBE.narrow(kindJust); // Just("Example")
System.out.println("Unwrapped: " + unwrappedMaybe);
MaybeKind<Integer> kindNothing = MAYBE.nothing();
Maybe<Integer> unwrappedNothing = MAYBE.narrow(kindNothing); // Nothing
System.out.println("Unwrapped Nothing: " + unwrappedNothing);
Interacting with Maybe values
The Maybe interface itself provides useful methods:
- `isJust()`: Returns `true` if it's a `Just`.
- `isNothing()`: Returns `true` if it's a `Nothing`.
- `get()`: Returns the value if `Just`, otherwise throws `NoSuchElementException`. Use with caution.
- `orElse(@NonNull T other)`: Returns the value if `Just`, otherwise returns `other`.
- `orElseGet(@NonNull Supplier<? extends @NonNull T> other)`: Returns the value if `Just`, otherwise invokes `other.get()`.
- The `Maybe` interface also has its own `map` and `flatMap` methods, which behave like those on `MaybeMonad` but operate directly on `Maybe` instances.
Key Operations (via MaybeMonad)
map(Function<A, B> f, Kind<MaybeKind.Witness, A> ma):
Applies f to the value inside ma if it's Just. If ma is Nothing, or if f returns null (which Maybe.fromNullable then converts to Nothing), the result is Nothing.
void mapExample() {
MaybeMonad maybeMonad = MaybeMonad.INSTANCE;
Kind<MaybeKind.Witness, Integer> justNum = MAYBE.just(10);
Kind<MaybeKind.Witness, Integer> nothingNum = MAYBE.nothing();
Function<Integer, String> numToString = n -> "Val: " + n;
Kind<MaybeKind.Witness, String> justStr = maybeMonad.map(numToString, justNum); // Just("Val: 10")
Kind<MaybeKind.Witness, String> nothingStr = maybeMonad.map(numToString, nothingNum); // Nothing
Function<Integer, String> numToNull = n -> null;
Kind<MaybeKind.Witness, String> mappedToNull = maybeMonad.map(numToNull, justNum); // Nothing
System.out.println("Map (Just): " + MAYBE.narrow(justStr));
System.out.println("Map (Nothing): " + MAYBE.narrow(nothingStr));
System.out.println("Map (To Null): " + MAYBE.narrow(mappedToNull));
}
flatMap(Function<A, Kind<MaybeKind.Witness, B>> f, Kind<MaybeKind.Witness, A> ma):
If ma is Just(a), applies f to a. f must return a Kind<MaybeKind.Witness, B>. If ma is Nothing, or f returns Nothing, the result is Nothing.
void flatMapExample() {
MaybeMonad maybeMonad = MaybeMonad.INSTANCE;
Function<String, Kind<MaybeKind.Witness, Integer>> parseString = s -> {
try {
return MAYBE.just(Integer.parseInt(s));
} catch (NumberFormatException e) {
return MAYBE.nothing();
}
};
Kind<MaybeKind.Witness, String> justFiveStr = MAYBE.just("5");
Kind<MaybeKind.Witness, Integer> parsedJust = maybeMonad.flatMap(parseString, justFiveStr); // Just(5)
Kind<MaybeKind.Witness, String> justNonNumStr = MAYBE.just("abc");
Kind<MaybeKind.Witness, Integer> parsedNonNum = maybeMonad.flatMap(parseString, justNonNumStr); // Nothing
System.out.println("FlatMap (Just): " + MAYBE.narrow(parsedJust));
System.out.println("FlatMap (NonNum): " + MAYBE.narrow(parsedNonNum));
}
ap(Kind<MaybeKind.Witness, Function<A, B>> ff, Kind<MaybeKind.Witness, A> fa):
If ff is Just(f) and fa is Just(a), applies f to a. Otherwise, the result is Nothing.
void apExample() {
MaybeMonad maybeMonad = MaybeMonad.INSTANCE;
Kind<MaybeKind.Witness, Integer> justNum = MAYBE.just(10);
Kind<MaybeKind.Witness, Integer> nothingNum = MAYBE.nothing();
Kind<MaybeKind.Witness, Function<Integer, String>> justFunc = MAYBE.just(i -> "Result: " + i);
Kind<MaybeKind.Witness, Function<Integer, String>> nothingFunc = MAYBE.nothing();
Kind<MaybeKind.Witness, String> apApplied = maybeMonad.ap(justFunc, justNum); // Just("Result: 10")
Kind<MaybeKind.Witness, String> apNothingFunc = maybeMonad.ap(nothingFunc, justNum); // Nothing
Kind<MaybeKind.Witness, String> apNothingVal = maybeMonad.ap(justFunc, nothingNum); // Nothing
System.out.println("Ap (Applied): " + MAYBE.narrow(apApplied));
System.out.println("Ap (Nothing Func): " + MAYBE.narrow(apNothingFunc));
System.out.println("Ap (Nothing Val): " + MAYBE.narrow(apNothingVal));
}
Example: handleErrorWith(Kind<MaybeKind.Witness, A> ma, Function<Unit, Kind<MaybeKind.Witness, A>> handler)
If ma is Just, it's returned. If ma is Nothing (the "error" state), handler is invoked (with Unit.INSTANCE as the Unit argument) to provide a recovery MaybeKind.
void handleErrorWithExample() {
MaybeMonad maybeMonad = MaybeMonad.INSTANCE;
Function<Unit, Kind<MaybeKind.Witness, String>> recover = v -> MAYBE.just("Recovered");
Kind<MaybeKind.Witness, String> handledJust = maybeMonad.handleErrorWith(MAYBE.just("Original"), recover); // Just("Original")
Kind<MaybeKind.Witness, String> handledNothing = maybeMonad.handleErrorWith(MAYBE.nothing(), recover); // Just("Recovered")
System.out.println("HandleError (Just): " + MAYBE.narrow(handledJust));
System.out.println("HandleError (Nothing): " + MAYBE.narrow(handledNothing));
}
A complete example demonstrating generic usage:
public void monadExample() {
MaybeMonad maybeMonad = MaybeMonad.INSTANCE;
// 1. Create MaybeKind instances
Kind<MaybeKind.Witness, Integer> presentIntKind = MAYBE.just(100);
Kind<MaybeKind.Witness, Integer> absentIntKind = MAYBE.nothing();
Kind<MaybeKind.Witness, String> nullInputStringKind = maybeMonad.of(null); // Becomes Nothing
// 2. Use map
Function<Integer, String> intToStatus = n -> "Status: " + n;
Kind<MaybeKind.Witness, String> mappedPresent = maybeMonad.map(intToStatus, presentIntKind);
Kind<MaybeKind.Witness, String> mappedAbsent = maybeMonad.map(intToStatus, absentIntKind);
System.out.println("Mapped (Present): " + MAYBE.narrow(mappedPresent)); // Just(Status: 100)
System.out.println("Mapped (Absent): " + MAYBE.narrow(mappedAbsent)); // Nothing
// 3. Use flatMap
Function<Integer, Kind<MaybeKind.Witness, String>> intToPositiveStatusKind = n ->
(n > 0) ? maybeMonad.of("Positive: " + n) : MAYBE.nothing();
Kind<MaybeKind.Witness, String> flatMappedPresent = maybeMonad.flatMap(intToPositiveStatusKind, presentIntKind);
Kind<MaybeKind.Witness, String> flatMappedZero = maybeMonad.flatMap(intToPositiveStatusKind, maybeMonad.of(0)); // 0 is not > 0
System.out.println("FlatMapped (Present Positive): " + MAYBE.narrow(flatMappedPresent)); // Just(Positive: 100)
System.out.println("FlatMapped (Zero): " + MAYBE.narrow(flatMappedZero)); // Nothing
// 4. Use 'of' and 'raiseError'
Kind<MaybeKind.Witness, String> fromOf = maybeMonad.of("Direct Value");
Kind<MaybeKind.Witness, String> fromRaiseError = maybeMonad.raiseError(Unit.INSTANCE); // Creates Nothing
System.out.println("From 'of': " + MAYBE.narrow(fromOf)); // Just(Direct Value)
System.out.println("From 'raiseError': " + MAYBE.narrow(fromRaiseError)); // Nothing
System.out.println("From 'of(null)': " + MAYBE.narrow(nullInputStringKind)); // Nothing
// 5. Use handleErrorWith
Function<Unit, Kind<MaybeKind.Witness, Integer>> recoverWithDefault =
v -> maybeMonad.of(-1); // Default value if absent
Kind<MaybeKind.Witness, Integer> recoveredFromAbsent =
maybeMonad.handleErrorWith(absentIntKind, recoverWithDefault);
Kind<MaybeKind.Witness, Integer> notRecoveredFromPresent =
maybeMonad.handleErrorWith(presentIntKind, recoverWithDefault);
System.out.println("Recovered (from Absent): " + MAYBE.narrow(recoveredFromAbsent)); // Just(-1)
System.out.println("Recovered (from Present): " + MAYBE.narrow(notRecoveredFromPresent)); // Just(100)
// Using the generic processData function
Kind<MaybeKind.Witness, String> processedPresent = processData(presentIntKind, x -> "Processed: " + x, "N/A", maybeMonad);
Kind<MaybeKind.Witness, String> processedAbsent = processData(absentIntKind, x -> "Processed: " + x, "N/A", maybeMonad);
System.out.println("Generic Process (Present): " + MAYBE.narrow(processedPresent)); // Just(Processed: 100)
System.out.println("Generic Process (Absent): " + MAYBE.narrow(processedAbsent)); // Just(N/A)
// Unwrap to get back the standard Maybe
Maybe<String> finalMappedMaybe = MAYBE.narrow(mappedPresent);
System.out.println("Final unwrapped mapped maybe: " + finalMappedMaybe); // Just(Status: 100)
}
public static <A, B> Kind<MaybeKind.Witness, B> processData(
Kind<MaybeKind.Witness, A> inputKind,
Function<A, B> mapper,
B defaultValueOnAbsence,
MaybeMonad monad
) {
// inputKind is now Kind<MaybeKind.Witness, A>, which is compatible with monad.map
Kind<MaybeKind.Witness, B> mappedKind = monad.map(mapper, inputKind);
// The result of monad.map is Kind<MaybeKind.Witness, B>.
// The handler (Unit v) -> monad.of(defaultValueOnAbsence) also produces Kind<MaybeKind.Witness, B>.
return monad.handleErrorWith(mappedKind, (Unit v) -> monad.of(defaultValueOnAbsence));
}
This example highlights how MaybeMonad facilitates working with optional values in a functional, type-safe manner, especially when dealing with the HKT abstractions and requiring non-null guarantees for present values.
The OptionalMonad:
Monadic Operations for Java Optional
- How to integrate Java's Optional with Higher-Kinded-J's type class system
- Using MonadError with Unit to represent absence as an error state
- Chaining optional operations with automatic empty propagation
- Building safe database and service call pipelines
- When to choose Optional vs Maybe for your use case
Purpose
The OptionalMonad in the Higher-Kinded-J library provides a monadic interface for Java's standard java.util.Optional<T>. It allows developers to work with Optional values in a more functional and composable style, enabling operations like map, flatMap, and ap (apply) within the higher-kinded type (HKT) system. This is particularly useful for sequencing operations that may or may not produce a value, handling the presence or absence of values gracefully.
Key benefits include:
- Functional Composition: Easily chain operations on `Optional`s, where each operation might return an `Optional` itself. If any step results in an `Optional.empty()`, subsequent operations are typically short-circuited, propagating the empty state.
- HKT Integration: `OptionalKind<A>` (the higher-kinded wrapper for `Optional<A>`) and `OptionalMonad` allow `Optional` to be used with generic functions and type classes expecting `Kind<F, A>`, `Functor<F>`, `Applicative<F>`, `Monad<M>`, or even `MonadError<M, E>`.
- Error Handling for Absence: `OptionalMonad` implements `MonadError<OptionalKind.Witness, Unit>`. In this context, `Optional.empty()` is treated as the "error" state, and `Unit` is used as the phantom error type, signifying absence rather than a traditional exception.
It implements MonadError<OptionalKind.Witness, Unit>, which means it also transitively implements Monad<OptionalKind.Witness>, Applicative<OptionalKind.Witness>, and Functor<OptionalKind.Witness>.
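For intuition, plain java.util.Optional already offers an analogue of this recovery behaviour: Optional.or (Java 9+) keeps a present value and only consults the supplier when empty. This is a sketch of the semantics, not the OptionalMonad API:

```java
import java.util.Optional;
import java.util.function.Supplier;

// The supplier plays the role of the handleErrorWith handler
Supplier<Optional<String>> recover = () -> Optional.of("Recovered");

// Present value: the recovery supplier is never consulted
Optional<String> present = Optional.of("Exists").or(recover);

// Empty ("error" state): the supplier provides the recovery value
Optional<String> recovered = Optional.<String>empty().or(recover);
```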
Structure
How to Use OptionalMonad and OptionalKind
Creating Instances
OptionalKind<A> is the higher-kinded type representation for java.util.Optional<A>. You typically create OptionalKind instances using the OptionalKindHelper utility class or the of and raiseError methods from OptionalMonad.
OPTIONAL.widen(Optional)
Converts a standard java.util.Optional<A> into an OptionalKind<A>.
// Wrapping a present Optional
Optional<String> presentOptional = Optional.of("Hello");
OptionalKind<String> kindPresent = OPTIONAL.widen(presentOptional);
// Wrapping an empty Optional
Optional<Integer> emptyOptional = Optional.empty();
OptionalKind<Integer> kindEmpty = OPTIONAL.widen(emptyOptional);
// Wrapping an Optional built from a possibly-null value
String possiblyNullValue = null;
Optional<String> nullableOptional = Optional.ofNullable(possiblyNullValue); // Results in Optional.empty()
OptionalKind<String> kindFromNullable = OPTIONAL.widen(nullableOptional);
optionalMonad.of(A value)
Lifts a single value (which can be null) into the OptionalKind context. It uses Optional.ofNullable(value) internally.
OptionalMonad optionalMonad = OptionalMonad.INSTANCE;
Kind<OptionalKind.Witness, String> kindFromValue = optionalMonad.of("World"); // Wraps Optional.of("World")
Kind<OptionalKind.Witness, Integer> kindFromNullValue = optionalMonad.of(null); // Wraps Optional.empty()
optionalMonad.raiseError(Unit error)
Creates an empty OptionalKind. Since Unit is the error type, this method effectively represents the "error" state of an Optional, which is Optional.empty(). The error argument (Unit.INSTANCE) is ignored.
OptionalMonad optionalMonad = OptionalMonad.INSTANCE;
Kind<OptionalKind.Witness, String> emptyKindFromError = optionalMonad.raiseError(Unit.INSTANCE); // Represents Optional.empty()
To get the underlying java.util.Optional<A> from an OptionalKind<A>, use OPTIONAL.narrow():
OptionalKind<String> kindPresent = OPTIONAL.widen(Optional.of("Example"));
Optional<String> unwrappedOptional = OPTIONAL.narrow(kindPresent); // Returns Optional.of("Example")
System.out.println("Unwrapped: " + unwrappedOptional);
OptionalKind<Integer> kindEmpty = OPTIONAL.widen(Optional.empty());
Optional<Integer> unwrappedEmpty = OPTIONAL.narrow(kindEmpty); // Returns Optional.empty()
System.out.println("Unwrapped Empty: " + unwrappedEmpty);
Key Operations
The OptionalMonad provides standard monadic and error-handling operations:
map(Function<A, B> f, Kind<OptionalKind.Witness, A> fa):
Applies a function f to the value inside fa if it's present. If fa is empty, it remains empty. The function f can return null, which Optional.map turns into Optional.empty().
public void mapExample() {
OptionalMonad optionalMonad = OptionalMonad.INSTANCE;
OptionalKind<Integer> presentNumber = OPTIONAL.widen(Optional.of(10));
OptionalKind<Integer> emptyNumber = OPTIONAL.widen(Optional.empty());
Function<Integer, String> intToString = i -> "Number: " + i;
Kind<OptionalKind.Witness, String> presentString = optionalMonad.map(intToString, presentNumber);
// OPTIONAL.narrow(presentString) would be Optional.of("Number: 10")
Kind<OptionalKind.Witness, String> emptyString = optionalMonad.map(intToString, emptyNumber);
// OPTIONAL.narrow(emptyString) would be Optional.empty()
Function<Integer, String> intToNull = i -> null;
Kind<OptionalKind.Witness, String> mappedToNull = optionalMonad.map(intToNull, presentNumber);
// OPTIONAL.narrow(mappedToNull) would be Optional.empty()
System.out.println("Map (Present): " + OPTIONAL.narrow(presentString));
System.out.println("Map (Empty): " + OPTIONAL.narrow(emptyString));
System.out.println("Map (To Null): " + OPTIONAL.narrow(mappedToNull));
}
flatMap(Function<A, Kind<OptionalKind.Witness, B>> f, Kind<OptionalKind.Witness, A> ma):
Applies a function f to the value inside ma if it's present. The function f itself returns an OptionalKind<B>. If ma is empty, or if f returns an empty OptionalKind, the result is an empty OptionalKind.
public void flatMapExample() {
OptionalMonad optionalMonad = OptionalMonad.INSTANCE;
OptionalKind<String> presentInput = OPTIONAL.widen(Optional.of("5"));
OptionalKind<String> emptyInput = OPTIONAL.widen(Optional.empty());
Function<String, Kind<OptionalKind.Witness, Integer>> parseToIntKind = s -> {
try {
return OPTIONAL.widen(Optional.of(Integer.parseInt(s)));
} catch (NumberFormatException e) {
return OPTIONAL.widen(Optional.empty());
}
};
Kind<OptionalKind.Witness, Integer> parsedPresent = optionalMonad.flatMap(parseToIntKind, presentInput);
// OPTIONAL.narrow(parsedPresent) would be Optional.of(5)
Kind<OptionalKind.Witness, Integer> parsedEmpty = optionalMonad.flatMap(parseToIntKind, emptyInput);
// OPTIONAL.narrow(parsedEmpty) would be Optional.empty()
OptionalKind<String> nonNumericInput = OPTIONAL.widen(Optional.of("abc"));
Kind<OptionalKind.Witness, Integer> parsedNonNumeric = optionalMonad.flatMap(parseToIntKind, nonNumericInput);
// OPTIONAL.narrow(parsedNonNumeric) would be Optional.empty()
System.out.println("FlatMap (Present): " + OPTIONAL.narrow(parsedPresent));
System.out.println("FlatMap (Empty Input): " + OPTIONAL.narrow(parsedEmpty));
System.out.println("FlatMap (Non-numeric): " + OPTIONAL.narrow(parsedNonNumeric));
}
ap(Kind<OptionalKind.Witness, Function<A, B>> ff, Kind<OptionalKind.Witness, A> fa):
Applies an OptionalKind containing a function ff to an OptionalKind containing a value fa. If both are present, the function is applied. Otherwise, the result is empty.
public void apExample() {
OptionalMonad optionalMonad = OptionalMonad.INSTANCE;
OptionalKind<Function<Integer, String>> presentFuncKind =
OPTIONAL.widen(Optional.of(i -> "Value: " + i));
OptionalKind<Function<Integer, String>> emptyFuncKind =
OPTIONAL.widen(Optional.empty());
OptionalKind<Integer> presentValueKind = OPTIONAL.widen(Optional.of(100));
OptionalKind<Integer> emptyValueKind = OPTIONAL.widen(Optional.empty());
// Both present
Kind<OptionalKind.Witness, String> result1 = optionalMonad.ap(presentFuncKind, presentValueKind);
// OPTIONAL.narrow(result1) is Optional.of("Value: 100")
// Function empty
Kind<OptionalKind.Witness, String> result2 = optionalMonad.ap(emptyFuncKind, presentValueKind);
// OPTIONAL.narrow(result2) is Optional.empty()
// Value empty
Kind<OptionalKind.Witness, String> result3 = optionalMonad.ap(presentFuncKind, emptyValueKind);
// OPTIONAL.narrow(result3) is Optional.empty()
System.out.println("Ap (Both Present): " + OPTIONAL.narrow(result1));
System.out.println("Ap (Function Empty): " + OPTIONAL.narrow(result2));
System.out.println("Ap (Value Empty): " + OPTIONAL.narrow(result3));
}
Example: handleErrorWith(Kind<OptionalKind.Witness, A> ma, Function<Unit, Kind<OptionalKind.Witness, A>> handler)
If ma is present, it's returned. If ma is empty (the "error" state), the handler function is invoked (with Unit.INSTANCE as the Unit argument) to provide a recovery OptionalKind.
public void handleErrorWithExample() {
OptionalMonad optionalMonad = OptionalMonad.INSTANCE;
Kind<OptionalKind.Witness, String> presentKind = OPTIONAL.widen(Optional.of("Exists"));
OptionalKind<String> emptyKind = OPTIONAL.widen(Optional.empty());
Function<Unit, Kind<OptionalKind.Witness, String>> recoveryFunction =
(Unit unitInstance) -> OPTIONAL.widen(Optional.of("Recovered Value"));
// Handling error on a present OptionalKind
Kind<OptionalKind.Witness, String> handledPresent =
optionalMonad.handleErrorWith(presentKind, recoveryFunction);
// OPTIONAL.narrow(handledPresent) is Optional.of("Exists")
// Handling error on an empty OptionalKind
Kind<OptionalKind.Witness, String> handledEmpty =
optionalMonad.handleErrorWith(emptyKind, recoveryFunction);
// OPTIONAL.narrow(handledEmpty) is Optional.of("Recovered Value")
System.out.println("HandleError (Present): " + OPTIONAL.narrow(handledPresent));
System.out.println("HandleError (Empty): " + OPTIONAL.narrow(handledEmpty));
}
To use OptionalMonad in generic contexts that operate over Kind<F, A>:
public void monadExample() {
OptionalMonad optionalMonad = OptionalMonad.INSTANCE;
// 1. Create OptionalKind instances
OptionalKind<Integer> presentIntKind = OPTIONAL.widen(Optional.of(10));
Kind<OptionalKind.Witness, Integer> emptyIntKind = optionalMonad.raiseError(Unit.INSTANCE); // Creates empty
// 2. Use map
Function<Integer, String> intToMessage = n -> "Value is " + n;
Kind<OptionalKind.Witness, String> mappedPresent = optionalMonad.map(intToMessage, presentIntKind);
Kind<OptionalKind.Witness, String> mappedEmpty = optionalMonad.map(intToMessage, emptyIntKind);
System.out.println("Mapped (Present): " + OPTIONAL.narrow(mappedPresent)); // Optional[Value is 10]
System.out.println("Mapped (Empty): " + OPTIONAL.narrow(mappedEmpty)); // Optional.empty
// 3. Use flatMap
Function<Integer, Kind<OptionalKind.Witness, Double>> intToOptionalDouble = n ->
(n > 0) ? optionalMonad.of(n / 2.0) : optionalMonad.raiseError(Unit.INSTANCE);
Kind<OptionalKind.Witness, Double> flatMappedPresent = optionalMonad.flatMap(intToOptionalDouble, presentIntKind);
Kind<OptionalKind.Witness, Double> flatMappedEmpty = optionalMonad.flatMap(intToOptionalDouble, emptyIntKind);
Kind<OptionalKind.Witness, Integer> zeroIntKind = optionalMonad.of(0);
Kind<OptionalKind.Witness, Double> flatMappedZero = optionalMonad.flatMap(intToOptionalDouble, zeroIntKind);
System.out.println("FlatMapped (Present): " + OPTIONAL.narrow(flatMappedPresent)); // Optional[5.0]
System.out.println("FlatMapped (Empty): " + OPTIONAL.narrow(flatMappedEmpty)); // Optional.empty
System.out.println("FlatMapped (Zero): " + OPTIONAL.narrow(flatMappedZero)); // Optional.empty
// 4. Use 'of' and 'raiseError' (already shown in creation)
// 5. Use handleErrorWith
Function<Unit, Kind<OptionalKind.Witness, Integer>> recoverWithDefault =
v -> optionalMonad.of(-1); // Default value if empty
Kind<OptionalKind.Witness, Integer> recoveredFromEmpty =
optionalMonad.handleErrorWith(emptyIntKind, recoverWithDefault);
Kind<OptionalKind.Witness, Integer> notRecoveredFromPresent =
optionalMonad.handleErrorWith(presentIntKind, recoverWithDefault);
System.out.println("Recovered (from Empty): " + OPTIONAL.narrow(recoveredFromEmpty)); // Optional[-1]
System.out.println("Recovered (from Present): " + OPTIONAL.narrow(notRecoveredFromPresent)); // Optional[10]
// Unwrap to get back the standard Optional
Optional<String> finalMappedOptional = OPTIONAL.narrow(mappedPresent);
System.out.println("Final unwrapped mapped optional: " + finalMappedOptional);
}
This example demonstrates wrapping Optionals, applying monadic and error-handling operations via OptionalMonad, and unwrapping back to standard Optionals. The MonadError capabilities allow treating absence (Optional.empty) as a recoverable "error" state.
The Reader Monad:
Managed Dependencies and Configuration
- How to inject dependencies functionally without passing them everywhere
- Building computations that depend on shared configuration or context
- Using `ask` to access the environment and `local` to modify it
- Creating testable code with explicit dependency management
- Real-world examples with database connections and API configurations
Purpose
The Reader monad is a functional programming pattern primarily used for managing dependencies and context propagation in a clean and composable way. Imagine you have multiple functions or components that all need access to some shared, read-only environment, such as:
- Configuration settings (database URLs, API keys, feature flags).
- Shared resources (thread pools, connection managers).
- User context (user ID, permissions).
Instead of explicitly passing this environment object as an argument to every single function (which can become cumbersome and clutter signatures), the Reader monad encapsulates computations that depend on such an environment.
A Reader<R, A> represents a computation that, when provided with an environment of type R, will produce a value of type A. It essentially wraps a function R -> A.
The benefits of using the Reader monad include:
- Implicit Dependency Injection: The environment (`R`) is implicitly passed along the computation chain. Functions defined within the Reader context automatically get access to the environment when needed, without needing it explicitly in their signature.
- Composability: Reader computations can be easily chained together using standard monadic operations like `map` and `flatMap`.
- Testability: Dependencies are managed explicitly when the final Reader computation is run, making it easier to provide mock environments or configurations during testing.
- Code Clarity: Reduces the need to pass configuration objects through multiple layers of functions.
In Higher-Kinded-J, the Reader monad pattern is implemented via the Reader<R, A> interface and its corresponding HKT simulation types (ReaderKind, ReaderKindHelper) and type class instances (ReaderMonad, ReaderApplicative, ReaderFunctor).
Structure
The Reader<R, A> Type
The core type is the Reader<R, A> functional interface:
```java
@FunctionalInterface
public interface Reader<R, A> {
  @Nullable A run(@NonNull R r); // The core function: Environment -> Value

  // Static factories
  static <R, A> @NonNull Reader<R, A> of(@NonNull Function<R, A> runFunction);
  static <R, A> @NonNull Reader<R, A> constant(@Nullable A value);
  static <R> @NonNull Reader<R, R> ask();

  // Instance methods (for composition)
  default <B> @NonNull Reader<R, B> map(@NonNull Function<? super A, ? extends B> f);
  default <B> @NonNull Reader<R, B> flatMap(@NonNull Function<? super A, ? extends Reader<R, ? extends B>> f);
}
```
- `run(R r)`: Executes the computation by providing the environment `r` and returning the result `A`.
- `of(Function<R, A>)`: Creates a `Reader` from a given function.
- `constant(A value)`: Creates a `Reader` that ignores the environment and always returns the provided value.
- `ask()`: Creates a `Reader` that simply returns the environment itself as the result.
- `map(Function<A, B>)`: Transforms the result `A` to `B` after the reader is run, without affecting the required environment `R`.
- `flatMap(Function<A, Reader<R, B>>)`: Sequences computations. It runs the first reader, uses its result `A` to create a second reader (`Reader<R, B>`), and then runs that second reader with the original environment `R`.
Reader Components
To integrate Reader with Higher-Kinded-J:
- `ReaderKind<R, A>`: The marker interface extending `Kind<ReaderKind.Witness<R>, A>`. The witness type `F` is `ReaderKind.Witness<R>` (where `R` is fixed for a given monad instance), and the value type `A` is the result type of the reader.
- `ReaderKindHelper`: The utility class with static methods:
  - `widen(Reader<R, A>)`: Converts a `Reader` to `ReaderKind<R, A>`.
  - `narrow(Kind<ReaderKind.Witness<R>, A>)`: Converts `ReaderKind` back to `Reader`. Throws `KindUnwrapException` if the input is invalid.
  - `reader(Function<R, A>)`: Factory method to create a `ReaderKind` from a function.
  - `constant(A value)`: Factory method for a `ReaderKind` returning a constant value.
  - `ask()`: Factory method for a `ReaderKind` that returns the environment.
  - `runReader(Kind<ReaderKind.Witness<R>, A> kind, R environment)`: The primary way to execute a `ReaderKind` computation by providing the environment.
Type Class Instances (ReaderFunctor, ReaderApplicative, ReaderMonad)
These classes provide the standard functional operations for ReaderKind.Witness<R>, allowing you to treat Reader computations generically within Higher-Kinded-J:
- `ReaderFunctor<R>`: Implements `Functor<ReaderKind.Witness<R>>`. Provides the `map` operation.
- `ReaderApplicative<R>`: Extends `ReaderFunctor<R>` and implements `Applicative<ReaderKind.Witness<R>>`. Provides `of` (lifting a value) and `ap` (applying a wrapped function to a wrapped value).
- `ReaderMonad<R>`: Extends `ReaderApplicative<R>` and implements `Monad<ReaderKind.Witness<R>>`. Provides `flatMap` for sequencing computations that depend on previous results while implicitly carrying the environment `R`.
You typically instantiate ReaderMonad<R> for the specific environment type R you are working with.
1. Define Your Environment
```java
// Example Environment: Application Configuration
record AppConfig(String databaseUrl, int timeoutMillis, String apiKey) {}
```
2. Create Reader Computations
Use ReaderKindHelper factory methods:
```java
import static org.higherkindedj.hkt.reader.ReaderKindHelper.*;

import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.reader.ReaderKind;

// Reader that retrieves the database URL from the config
Kind<ReaderKind.Witness<AppConfig>, String> getDbUrl = reader(AppConfig::databaseUrl);

// Reader that retrieves the timeout
Kind<ReaderKind.Witness<AppConfig>, Integer> getTimeout = reader(AppConfig::timeoutMillis);

// Reader that returns a constant value, ignoring the environment
Kind<ReaderKind.Witness<AppConfig>, String> getDefaultUser = constant("guest");

// Reader that returns the entire configuration environment
Kind<ReaderKind.Witness<AppConfig>, AppConfig> getConfig = ask();
```
3. Get the ReaderMonad Instance
Instantiate the monad for your specific environment type R.
```java
import org.higherkindedj.hkt.reader.ReaderMonad;

// Monad instance for computations depending on AppConfig
ReaderMonad<AppConfig> readerMonad = new ReaderMonad<>();
```
4. Compose Computations using map and flatMap
Use the methods on the readerMonad instance.
```java
// Example 1: Map the timeout value
Kind<ReaderKind.Witness<AppConfig>, String> timeoutMessage = readerMonad.map(
    timeout -> "Timeout is: " + timeout + "ms",
    getTimeout // Input: Kind<ReaderKind.Witness<AppConfig>, Integer>
);

// Example 2: Use flatMap to get DB URL and then construct a connection string (depends on URL)
Function<String, Kind<ReaderKind.Witness<AppConfig>, String>> buildConnectionString =
    dbUrl -> reader( // <- We return a new Reader computation
        config -> dbUrl + "?apiKey=" + config.apiKey() // Access apiKey via the 'config' env
    );

Kind<ReaderKind.Witness<AppConfig>, String> connectionStringReader = readerMonad.flatMap(
    buildConnectionString, // Function: String -> Kind<ReaderKind.Witness<AppConfig>, String>
    getDbUrl // Input: Kind<ReaderKind.Witness<AppConfig>, String>
);

// Example 3: Combine multiple values using map2 (from Applicative)
Kind<ReaderKind.Witness<AppConfig>, String> dbInfo = readerMonad.map2(
    getDbUrl,
    getTimeout,
    (url, timeout) -> "DB: " + url + " (Timeout: " + timeout + ")"
);
```
5. Run the Computation
Provide the actual environment using ReaderKindHelper.runReader:
```java
AppConfig productionConfig = new AppConfig("prod-db.example.com", 5000, "prod-key-123");
AppConfig stagingConfig = new AppConfig("stage-db.example.com", 10000, "stage-key-456");

// Run the composed computations with different environments
String prodTimeoutMsg = runReader(timeoutMessage, productionConfig);
String stageTimeoutMsg = runReader(timeoutMessage, stagingConfig);

String prodConnectionString = runReader(connectionStringReader, productionConfig);
String stageConnectionString = runReader(connectionStringReader, stagingConfig);

String prodDbInfo = runReader(dbInfo, productionConfig);
String stageDbInfo = runReader(dbInfo, stagingConfig);

// Get the raw config using ask()
AppConfig retrievedProdConfig = runReader(getConfig, productionConfig);

System.out.println("Prod Timeout: " + prodTimeoutMsg);   // Output: Timeout is: 5000ms
System.out.println("Stage Timeout: " + stageTimeoutMsg); // Output: Timeout is: 10000ms
System.out.println("Prod Connection: " + prodConnectionString);  // Output: prod-db.example.com?apiKey=prod-key-123
System.out.println("Stage Connection: " + stageConnectionString);// Output: stage-db.example.com?apiKey=stage-key-456
System.out.println("Prod DB Info: " + prodDbInfo);   // Output: DB: prod-db.example.com (Timeout: 5000)
System.out.println("Stage DB Info: " + stageDbInfo); // Output: DB: stage-db.example.com (Timeout: 10000)
System.out.println("Retrieved Prod Config: " + retrievedProdConfig); // Output: AppConfig[databaseUrl=prod-db.example.com, timeoutMillis=5000, apiKey=prod-key-123]
```
Notice how the functions (buildConnectionString, the lambda in map2) don't need AppConfig as a parameter, but they can access it when needed within the reader(...) factory or implicitly via flatMap composition. The configuration is only provided once at the end when runReader is called.
Sometimes, a computation depending on an environment R might perform an action (like logging or initialising a component based on R) but doesn't produce a specific value other than signaling its completion. In such cases, the result type A of the Reader<R, A> can be org.higherkindedj.hkt.Unit.
```java
import static org.higherkindedj.hkt.reader.ReaderKindHelper.*;

import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.reader.ReaderKind;
import org.higherkindedj.hkt.reader.ReaderMonad;
import org.higherkindedj.hkt.Unit; // Import Unit

// Assume AppConfig is defined as before
// record AppConfig(String databaseUrl, int timeoutMillis, String apiKey) {}

// ReaderMonad instance (can be the same as before)
// ReaderMonad<AppConfig> readerMonad = new ReaderMonad<>();

// A Reader computation that performs a side-effect (printing to console)
// using the config and returns Unit.
Kind<ReaderKind.Witness<AppConfig>, Unit> logApiKey = reader(
    config -> {
      System.out.println("Accessed API Key: " + config.apiKey().substring(0, Math.min(config.apiKey().length(), 4)) + "...");
      return Unit.INSTANCE; // Explicitly return Unit.INSTANCE
    }
);

// You can compose this with other Reader computations.
// For example, get the DB URL and then log the API key.
Kind<ReaderKind.Witness<AppConfig>, Unit> getUrlAndLogKey = readerMonad.flatMap(
    dbUrl -> {
      System.out.println("Database URL for logging context: " + dbUrl);
      // After processing dbUrl (here, just printing), return the next action
      return logApiKey;
    },
    getDbUrl // Assuming getDbUrl: Kind<ReaderKind.Witness<AppConfig>, String>
);

// To run it:
// AppConfig currentConfig = new AppConfig("prod-db.example.com", 5000, "prod-key-123");

// Unit result = runReader(logApiKey, currentConfig);
// System.out.println("Log API Key result: " + result); // Output: Log API Key result: ()

// Unit resultChained = runReader(getUrlAndLogKey, currentConfig);
// System.out.println("Get URL and Log Key result: " + resultChained);
// Output:
// Database URL for logging context: prod-db.example.com
// Accessed API Key: prod...
// Get URL and Log Key result: ()
```
In this example:
- `logApiKey` is a `Reader<AppConfig, Unit>`. Its purpose is to perform an action (logging) using the `AppConfig`.
- It returns `Unit.INSTANCE` to signify that the action completed successfully but yields no other specific data.
- When composing, `flatMap` can be used to sequence such an action. If `logApiKey` were the last step in a sequence, the overall `flatMap` chain would also result in `Kind<ReaderKind.Witness<AppConfig>, Unit>`.
The Reader monad (Reader<R, A>, ReaderKind, ReaderMonad) in Higher-Kinded-J provides a functional approach to dependency injection and configuration management.
It allows you to define computations that depend on a read-only environment R without explicitly passing R everywhere. By using Higher-Kinded-J and the ReaderMonad, you can compose these dependent functions cleanly using map and flatMap, providing the actual environment only once when the final computation is executed via runReader.
This leads to more modular, testable, and less cluttered code when dealing with shared context.
For deeper exploration of the Reader monad and dependency injection patterns:
Foundational Resources:
- Cats Documentation: Reader Monad - Scala implementation with practical examples
- Haskell Wiki: Reader Monad - Theoretical foundation and use cases
- Mark Seemann: Dependency Injection Revisited - Functional alternatives to traditional DI
Java-Focused Resources:
- Functional Java: Reader implementation - Pure functional library for Java with Reader monad
- Vavr Documentation: Function composition patterns - Demonstrates functional composition techniques applicable to Reader pattern
- Baeldung: Introduction to Vavr - Java FP patterns and idioms
- Richard Warburton: "Java 8 Lambdas" (O'Reilly, 2014) - Functional programming fundamentals in Java
- Pierre-Yves Saumont: "Functional Programming in Java" (Manning, 2017) - Chapter on handling dependencies functionally
The State Monad:
Managing State Functionally
- How to manage state functionally without mutable variables
- Using `get`, `set`, `modify`, and `inspect` for state operations
- Building complex stateful workflows with automatic state threading
- Creating a bank account simulation with transaction history
- Why pure state management leads to more testable and maintainable code
Purpose
State is everywhere in programming—counters increment, configurations update, game characters level up. Yet managing state functionally, without mutation, often feels like fighting the paradigm. The State monad resolves this tension elegantly.
In many applications, we need to manage computations that involve state that changes over time.
Examples include:
- A counter being incremented.
- A configuration object being updated.
- The state of a game character.
- Parsing input where the current position needs to be tracked.
While imperative programming uses mutable variables, functional programming prefers immutability. The State monad provides a purely functional way to handle stateful computations without relying on mutable variables.
A State<S, A> represents a computation that takes an initial state S and produces a result value A along with a new, updated state S. It essentially wraps a function of the type S -> (A, S).
Key Benefits
- Explicit State: The state manipulation is explicitly encoded within the type `State<S, A>`.
- Purity: Functions using the State monad remain pure; they don't cause side effects by mutating external state. Instead, they describe how the state should transform.
- Composability: State computations can be easily sequenced using standard monadic operations (`map`, `flatMap`); the state is threaded through the sequence automatically, so you never pass it explicitly.
- Decoupling: Logic is decoupled from state handling mechanics.
- Testability: Pure state transitions are easier to test and reason about than code relying on mutable side effects.
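The `S -> (A, S)` shape and the automatic threading can be sketched without any library at all. The names below (`StateSketch`, `Pair`) are illustrative, not Higher-Kinded-J types; the library's actual API follows:

```java
import java.util.function.Function;

// Minimal sketch: a State<S, A> wraps a function S -> (A, S) -- it consumes
// an initial state and produces a result together with the successor state.
public class StateSketch {

    record Pair<A, S>(A value, S state) {}

    @FunctionalInterface
    interface State<S, A> {
        Pair<A, S> run(S s);

        default <B> State<S, B> map(Function<A, B> f) {
            return s -> {
                Pair<A, S> p = run(s);
                return new Pair<>(f.apply(p.value()), p.state());
            };
        }

        // flatMap threads the state for you: the second step starts from
        // the state the first step produced.
        default <B> State<S, B> flatMap(Function<A, State<S, B>> f) {
            return s -> {
                Pair<A, S> p = run(s);
                return f.apply(p.value()).run(p.state());
            };
        }

        static <S> State<S, S> get() { return s -> new Pair<>(s, s); }

        static <S> State<S, S> modify(Function<S, S> f) {
            // For brevity this sketch returns the new state as the value;
            // the library's modify returns Unit instead.
            return s -> { S next = f.apply(s); return new Pair<>(next, next); };
        }
    }

    public static void main(String[] args) {
        // Increment a counter twice, then read it -- no mutable variable in sight.
        State<Integer, Integer> program =
            State.<Integer>modify(n -> n + 1)
                .flatMap(x -> State.<Integer>modify(n -> n + 1))
                .flatMap(x -> State.<Integer>get());

        Pair<Integer, Integer> result = program.run(0);
        System.out.println("value = " + result.value() + ", state = " + result.state());
        // prints: value = 2, state = 2
    }
}
```

Each step is a pure description of a state transition; `run(0)` is the only place a concrete state appears, which is exactly the property that makes such workflows easy to test.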
In Higher-Kinded-J, the State monad pattern is implemented via the State<S, A> interface, its associated StateTuple<S, A> record, the HKT simulation types (StateKind, StateKindHelper), and the type class instances (StateMonad, StateApplicative, StateFunctor).
Structure
The State<S, A> Type and StateTuple<S, A>
The core type is the State<S, A> functional interface:
```java
@FunctionalInterface
public interface State<S, A> {

  // Represents the result: final value A and final state S
  record StateTuple<S, A>(@Nullable A value, @NonNull S state) { /* ... */ }

  // The core function: Initial State -> (Result Value, Final State)
  @NonNull StateTuple<S, A> run(@NonNull S initialState);

  // Static factories
  static <S, A> @NonNull State<S, A> of(@NonNull Function<@NonNull S, @NonNull StateTuple<S, A>> runFunction);
  static <S, A> @NonNull State<S, A> pure(@Nullable A value); // Creates State(s -> (value, s))
  static <S> @NonNull State<S, S> get(); // Creates State(s -> (s, s))
  static <S> @NonNull State<S, Unit> set(@NonNull S newState); // Creates State(s -> (Unit.INSTANCE, newState))
  static <S> @NonNull State<S, Unit> modify(@NonNull Function<@NonNull S, @NonNull S> f); // Creates State(s -> (Unit.INSTANCE, f(s)))
  static <S, A> @NonNull State<S, A> inspect(@NonNull Function<@NonNull S, @Nullable A> f); // Creates State(s -> (f(s), s))

  // Instance methods for composition
  default <B> @NonNull State<S, B> map(@NonNull Function<? super A, ? extends B> f);
  default <B> @NonNull State<S, B> flatMap(@NonNull Function<? super A, ? extends State<S, ? extends B>> f);
}
```
- `StateTuple<S, A>`: A simple record holding the pair `(value: A, state: S)` returned by running a `State` computation.
- `run(S initialState)`: Executes the stateful computation by providing the starting state.
- `of(...)`: The basic factory method taking the underlying function `S -> StateTuple<S, A>`.
- `pure(A value)`: Creates a computation that returns the given value `A` without changing the state.
- `get()`: Creates a computation that returns the current state `S` as its value, leaving the state unchanged.
- `set(S newState)`: Creates a computation that replaces the current state with `newState` and returns `Unit.INSTANCE` as its result value.
- `modify(Function<S, S> f)`: Creates a computation that applies a function `f` to the current state to get the new state, returning `Unit.INSTANCE` as its result value.
- `inspect(Function<S, A> f)`: Creates a computation that applies a function `f` to the current state to calculate a result value `A`, leaving the state unchanged.
- `map(...)`: Transforms the result value `A` to `B` after the computation runs, leaving the state transition logic untouched.
- `flatMap(...)`: The core sequencing operation. It runs the first `State` computation, takes its result value `A`, uses it to create a second `State` computation, and runs that second computation using the state produced by the first one. The final result and state are those from the second computation.
State Components
To integrate State with Higher-Kinded-J:
- `StateKind<S, A>`: The marker interface extending `Kind<StateKind.Witness<S>, A>`. The witness type `F` is `StateKind.Witness<S>` (where `S` is fixed for a given monad instance), and the value type `A` is the result type `A` from `StateTuple`.
- `StateKindHelper`: The utility class with static methods:
  - `widen(State<S, A>)`: Converts a `State` to `Kind<StateKind.Witness<S>, A>`.
  - `narrow(Kind<StateKind.Witness<S>, A>)`: Converts `StateKind` back to `State`. Throws `KindUnwrapException` if the input is invalid.
  - `pure(A value)`: Factory for `Kind` equivalent to `State.pure`.
  - `get()`: Factory for `Kind` equivalent to `State.get`.
  - `set(S newState)`: Factory for `Kind` equivalent to `State.set`.
  - `modify(Function<S, S> f)`: Factory for `Kind` equivalent to `State.modify`.
  - `inspect(Function<S, A> f)`: Factory for `Kind` equivalent to `State.inspect`.
  - `runState(Kind<StateKind.Witness<S>, A> kind, S initialState)`: Runs the computation and returns the `StateTuple<S, A>`.
  - `evalState(Kind<StateKind.Witness<S>, A> kind, S initialState)`: Runs the computation and returns only the final value `A`.
  - `execState(Kind<StateKind.Witness<S>, A> kind, S initialState)`: Runs the computation and returns only the final state `S`.
Type Class Instances (StateFunctor, StateApplicative, StateMonad)
These classes provide the standard functional operations for StateKind.Witness<S>:
- `StateFunctor<S>`: Implements `Functor<StateKind.Witness<S>>`. Provides `map`.
- `StateApplicative<S>`: Extends `StateFunctor<S>`, implements `Applicative<StateKind.Witness<S>>`. Provides `of` (same as `pure`) and `ap`.
- `StateMonad<S>`: Extends `StateApplicative<S>`, implements `Monad<StateKind.Witness<S>>`. Provides `flatMap` for sequencing stateful computations.
You instantiate StateMonad<S> for the specific state type S you are working with.
We want to model a bank account where we can:
- Deposit funds.
- Withdraw funds (if sufficient balance).
- Get the current balance.
- Get the transaction history.
All these operations will affect or depend on the account's state (balance and history).
1. Define the State
First, we define a record to represent the state of our bank account.
```java
public record AccountState(BigDecimal balance, List<Transaction> history) {

  public AccountState {
    requireNonNull(balance, "Balance cannot be null.");
    requireNonNull(history, "History cannot be null.");
    // Ensure history is unmodifiable and a defensive copy is made.
    history = Collections.unmodifiableList(new ArrayList<>(history));
  }

  // Convenience constructor for initial state
  public static AccountState initial(BigDecimal initialBalance) {
    requireNonNull(initialBalance, "Initial balance cannot be null");
    if (initialBalance.compareTo(BigDecimal.ZERO) < 0) {
      throw new IllegalArgumentException("Initial balance cannot be negative.");
    }
    Transaction initialTx = new Transaction(
        TransactionType.INITIAL_BALANCE,
        initialBalance,
        LocalDateTime.now(),
        "Initial account balance"
    );
    // The history now starts with this initial transaction
    return new AccountState(initialBalance, Collections.singletonList(initialTx));
  }

  public AccountState addTransaction(Transaction transaction) {
    requireNonNull(transaction, "Transaction cannot be null");
    List<Transaction> newHistory = new ArrayList<>(history); // Takes current history
    newHistory.add(transaction); // Adds new one
    return new AccountState(this.balance, Collections.unmodifiableList(newHistory));
  }

  public AccountState withBalance(BigDecimal newBalance) {
    requireNonNull(newBalance, "New balance cannot be null");
    return new AccountState(newBalance, this.history);
  }
}
```
2. Define Transaction Types
We'll also need a way to represent transactions.
```java
public enum TransactionType {
  INITIAL_BALANCE,
  DEPOSIT,
  WITHDRAWAL,
  REJECTED_WITHDRAWAL,
  REJECTED_DEPOSIT
}

public record Transaction(
    TransactionType type, BigDecimal amount, LocalDateTime timestamp, String description) {

  public Transaction {
    requireNonNull(type, "Transaction type cannot be null");
    requireNonNull(amount, "Transaction amount cannot be null");
    requireNonNull(timestamp, "Transaction timestamp cannot be null");
    requireNonNull(description, "Transaction description cannot be null");
    if (type != INITIAL_BALANCE && amount.compareTo(BigDecimal.ZERO) <= 0) {
      if (!(type == REJECTED_DEPOSIT && amount.compareTo(BigDecimal.ZERO) <= 0)
          && !(type == REJECTED_WITHDRAWAL && amount.compareTo(BigDecimal.ZERO) <= 0)) {
        throw new IllegalArgumentException(
            "Transaction amount must be positive for actual operations.");
      }
    }
  }
}
```
3. Define State Actions
Now, we define our bank operations as functions that return Kind<StateKind.Witness<AccountState>, YourResultType>.
These actions describe how the state should change and what value they produce.
We'll put these in a BankAccountWorkflow.java class.
```java
public class BankAccountWorkflow {

  private static final StateMonad<AccountState> accountStateMonad = new StateMonad<>();

  public static Function<BigDecimal, Kind<StateKind.Witness<AccountState>, Unit>> deposit(
      String description) {
    return amount ->
        STATE.widen(
            State.modify(
                currentState -> {
                  if (amount.compareTo(BigDecimal.ZERO) <= 0) {
                    // For rejected deposit, log the problematic amount
                    Transaction rejected =
                        new Transaction(
                            TransactionType.REJECTED_DEPOSIT,
                            amount,
                            LocalDateTime.now(),
                            "Rejected Deposit: " + description + " - Invalid Amount " + amount);
                    return currentState.addTransaction(rejected);
                  }
                  BigDecimal newBalance = currentState.balance().add(amount);
                  Transaction tx =
                      new Transaction(
                          TransactionType.DEPOSIT, amount, LocalDateTime.now(), description);
                  return currentState.withBalance(newBalance).addTransaction(tx);
                }));
  }

  public static Function<BigDecimal, Kind<StateKind.Witness<AccountState>, Boolean>> withdraw(
      String description) {
    return amount ->
        STATE.widen(
            State.of(
                currentState -> {
                  if (amount.compareTo(BigDecimal.ZERO) <= 0) {
                    // For rejected withdrawal due to invalid amount, log the problematic amount
                    Transaction rejected =
                        new Transaction(
                            TransactionType.REJECTED_WITHDRAWAL,
                            amount,
                            LocalDateTime.now(),
                            "Rejected Withdrawal: " + description + " - Invalid Amount " + amount);
                    return new StateTuple<>(false, currentState.addTransaction(rejected));
                  }
                  if (currentState.balance().compareTo(amount) >= 0) {
                    BigDecimal newBalance = currentState.balance().subtract(amount);
                    Transaction tx =
                        new Transaction(
                            TransactionType.WITHDRAWAL, amount, LocalDateTime.now(), description);
                    AccountState updatedState =
                        currentState.withBalance(newBalance).addTransaction(tx);
                    return new StateTuple<>(true, updatedState);
                  } else {
                    // For rejected withdrawal due to insufficient funds, log the amount that was
                    // attempted
                    Transaction tx =
                        new Transaction(
                            TransactionType.REJECTED_WITHDRAWAL,
                            amount,
                            LocalDateTime.now(),
                            "Rejected Withdrawal: "
                                + description
                                + " - Insufficient Funds. Balance: "
                                + currentState.balance());
                    AccountState updatedState = currentState.addTransaction(tx);
                    return new StateTuple<>(false, updatedState);
                  }
                }));
  }

  public static Kind<StateKind.Witness<AccountState>, BigDecimal> getBalance() {
    return STATE.widen(State.inspect(AccountState::balance));
  }

  public static Kind<StateKind.Witness<AccountState>, List<Transaction>> getHistory() {
    return STATE.widen(State.inspect(AccountState::history));
  }

  // ... main method will be added
}
```
4. Compose Computations using map and flatMap
We use flatMap and map from accountStateMonad to sequence these actions. The state is threaded automatically.
```java
public class BankAccountWorkflow {

  // ... (monad instance and previous actions)

  public static void main(String[] args) {
    // Initial state: Account with £100 balance.
    AccountState initialState = AccountState.initial(new BigDecimal("100.00"));

    var workflow =
        For.from(accountStateMonad, deposit("Salary").apply(new BigDecimal("20.00")))
            .from(a -> withdraw("Bill Payment").apply(new BigDecimal("50.00")))
            .from(b -> withdraw("Groceries").apply(new BigDecimal("70.00")))
            .from(c -> getBalance())
            .from(t -> getHistory())
            .yield((deposit, w1, w2, bal, history) -> {
              var report = new StringBuilder();
              history.forEach(tx -> report.append(" - %s\n".formatted(tx)));
              return report.toString();
            });

    StateTuple<AccountState, String> finalResultTuple =
        StateKindHelper.runState(workflow, initialState);

    System.out.println(finalResultTuple.value());

    System.out.println("\nDirect Final Account State:");
    System.out.println("Balance: £" + finalResultTuple.state().balance());
    System.out.println(
        "History contains " + finalResultTuple.state().history().size() + " transaction(s):");
    finalResultTuple.state().history().forEach(tx -> System.out.println(" - " + tx));
  }
}
```
5. Run the Computation
The StateKindHelper.runState(workflow, initialState) call executes the entire sequence of operations, starting with initialState.
It returns a StateTuple containing the final result of the entire workflow (in this case, the String report) and the final state of the AccountState.
```text
Direct Final Account State:
Balance: £0.00
History contains 4 transaction(s):
 - Transaction[type=INITIAL_BALANCE, amount=100.00, timestamp=2025-05-18T17:35:53.564874439, description=Initial account balance]
 - Transaction[type=DEPOSIT, amount=20.00, timestamp=2025-05-18T17:35:53.578424630, description=Salary]
 - Transaction[type=WITHDRAWAL, amount=50.00, timestamp=2025-05-18T17:35:53.579196349, description=Bill Payment]
 - Transaction[type=WITHDRAWAL, amount=70.00, timestamp=2025-05-18T17:35:53.579453984, description=Groceries]
```
The State monad (`State<S, A>`, `StateKind`, `StateMonad`), as provided by Higher-Kinded-J, offers an elegant and functional way to manage state transformations.
By defining atomic state operations and composing them with map and flatMap, you can build complex stateful workflows that are easier to reason about, test, and maintain, as the state is explicitly managed by the monad's structure rather than through mutable side effects. The For comprehension helps simplify the workflow.
Key operations like get, set, modify, and inspect provide convenient ways to interact with the state within the monadic context.
For deeper exploration of the State monad and its applications:
Foundational Resources:
- Philip Wadler: Monads for functional programming - Classic paper introducing monads including State
- Cats Documentation: State Monad - Scala implementation with comprehensive examples
- Haskell Wiki: State Monad - Conceptual foundation and theory
Java-Focused Resources:
- Pierre-Yves Saumont: "Functional Programming in Java" (Manning, 2017) - Deep dive into functional techniques including state management
- Venkat Subramaniam: "Functional Programming in Java" (O'Reilly, 2014) - Practical guide to FP patterns in modern Java
The StreamMonad:
Lazy, Potentially Infinite Sequences with Functional Operations
- How to work with Streams as contexts for lazy, potentially infinite sequences
- Understanding Stream's single-use semantics and how to work with them
- Using `map`, `flatMap`, and `ap` for lazy functional composition
- Leveraging `StreamOps` utilities for common stream operations
- Building efficient data processing pipelines with monadic operations
- When to choose Stream over List for sequential processing
Purpose
The StreamMonad in the Higher-Kinded-J library provides a monadic interface for Java's standard java.util.stream.Stream. It allows developers to work with streams in a functional style, enabling operations like map, flatMap, and ap within the higher-kinded type system. This is particularly useful for processing sequences of data lazily, handling potentially infinite sequences, and composing stream operations in a type-safe manner.
Key benefits include:
- Lazy Evaluation: Operations are not performed until a terminal operation is invoked, allowing for efficient processing of large or infinite sequences.
- HKT Integration: `StreamKind` (the higher-kinded wrapper for `Stream`) and `StreamMonad` allow `Stream` to be used with generic functions and type classes expecting `Kind<F, A>`, `Functor<F>`, `Applicative<F>`, or `Monad<F>`.
- MonadZero Instance: Provides an empty stream via `zero()`, useful for filtering and conditional logic.
- Functional Composition: Easily chain operations on streams where each operation maintains laziness and allows composition of complex data transformations.
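The laziness is observable with plain JDK streams, no library types needed: a side-effecting counter inside `map` stays at zero until a terminal operation runs.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazinessDemo {
    public static void main(String[] args) {
        AtomicInteger mapCalls = new AtomicInteger();

        // Intermediate operations only *describe* the pipeline...
        Stream<Integer> pipeline = Stream.of(1, 2, 3)
            .map(i -> { mapCalls.incrementAndGet(); return i * 10; });

        System.out.println("before terminal op: " + mapCalls.get()); // 0

        // ...the terminal operation triggers evaluation.
        List<Integer> result = pipeline.toList();
        System.out.println("after terminal op: " + mapCalls.get()); // 3
        System.out.println(result); // [10, 20, 30]
    }
}
```

The same deferral applies when the pipeline is built through `StreamMonad`'s `map` and `flatMap`: nothing executes until you `narrow` the `Kind` and invoke a terminal operation.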
It implements MonadZero<StreamKind.Witness>, inheriting from Monad, Applicative, and Functor.
Java Streams have single-use semantics. Once a terminal operation has been performed on a stream (including operations that narrow and inspect the stream), that stream cannot be reused. Attempting to operate on a consumed stream throws IllegalStateException.
Best Practice: Create fresh stream instances for each operation sequence. Don't store and reuse Kind<StreamKind.Witness, A> instances after they've been consumed.
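A short, library-free demonstration of the single-use rule and the fresh-instance practice (here using a `Supplier` to mint a new stream per pipeline):

```java
import java.util.function.Supplier;
import java.util.stream.Stream;

public class SingleUseDemo {
    public static void main(String[] args) {
        Stream<String> letters = Stream.of("a", "b");
        System.out.println(letters.count()); // the terminal op consumes the stream

        try {
            letters.count(); // second terminal op on the same instance
        } catch (IllegalStateException e) {
            System.out.println("Reuse rejected: " + e.getMessage());
        }

        // Best practice: mint a fresh stream for every pipeline.
        Supplier<Stream<String>> fresh = () -> Stream.of("a", "b");
        System.out.println(fresh.get().count()); // a new instance each time
        System.out.println(fresh.get().count()); // so repeated runs are safe
    }
}
```

The same `Supplier` pattern works for `Kind<StreamKind.Witness, A>`: wrap the construction, not a stored instance.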
Structure
How to Use StreamMonad and StreamKind
Creating Instances
StreamKind<A> is the higher-kinded type representation for java.util.stream.Stream<A>. You create StreamKind instances using the StreamKindHelper utility class, the of method from StreamMonad, or the convenient factory methods in StreamOps.
STREAM.widen(Stream)
Converts a standard java.util.stream.Stream<A> into a Kind<StreamKind.Witness, A>.
```java
Stream<String> stringStream = Stream.of("a", "b", "c");
Kind<StreamKind.Witness, String> streamKind1 = STREAM.widen(stringStream);

Stream<Integer> intStream = Stream.of(1, 2, 3);
Kind<StreamKind.Witness, Integer> streamKind2 = STREAM.widen(intStream);

Stream<Object> emptyStream = Stream.empty();
Kind<StreamKind.Witness, Object> streamKindEmpty = STREAM.widen(emptyStream);
```
Lifts a single value into the StreamKind context, creating a singleton stream. A null input value results in an empty StreamKind.
```java
StreamMonad streamMonad = StreamMonad.INSTANCE;

Kind<StreamKind.Witness, String> streamKindOneItem = streamMonad.of("hello"); // Contains a stream with one element: "hello"
Kind<StreamKind.Witness, Integer> streamKindAnotherItem = streamMonad.of(42); // Contains a stream with one element: 42
Kind<StreamKind.Witness, Object> streamKindFromNull = streamMonad.of(null);   // Contains an empty stream
```
Creates an empty StreamKind, useful for filtering operations or providing a "nothing" value in monadic computations.
```java
StreamMonad streamMonad = StreamMonad.INSTANCE;
Kind<StreamKind.Witness, String> emptyStreamKind = streamMonad.zero(); // Empty stream
```
To get the underlying java.util.stream.Stream<A> from a Kind<StreamKind.Witness, A>, use STREAM.narrow():
```java
Kind<StreamKind.Witness, String> streamKind = STREAM.widen(Stream.of("example"));
Stream<String> unwrappedStream = STREAM.narrow(streamKind); // Returns Stream containing "example"

// You can then perform terminal operations on the unwrapped stream
List<String> result = unwrappedStream.collect(Collectors.toList());
System.out.println(result); // [example]
```
The StreamOps utility class provides convenient factory methods for creating StreamKind instances:
```java
// Create from varargs
Kind<StreamKind.Witness, Integer> numbers = fromArray(1, 2, 3, 4, 5);

// Create a range (exclusive end)
Kind<StreamKind.Witness, Integer> range = range(1, 11); // 1 through 10

// Create from collection
List<String> names = Arrays.asList("Alice", "Bob", "Charlie");
Kind<StreamKind.Witness, String> nameStream = fromIterable(names);

// Create empty stream
Kind<StreamKind.Witness, String> empty = empty();
```
Key Operations
The StreamMonad provides standard monadic operations, all maintaining lazy evaluation:
map(Function<A, B> f, Kind<StreamKind.Witness, A> fa):
Applies a function f to each element of the stream within fa, returning a new StreamKind containing the transformed elements. The transformation is lazy and won't execute until a terminal operation is performed.
```java
StreamMonad streamMonad = StreamMonad.INSTANCE;
Kind<StreamKind.Witness, Integer> numbers = STREAM.widen(Stream.of(1, 2, 3));

Function<Integer, String> intToString = i -> "Number: " + i;
Kind<StreamKind.Witness, String> strings = streamMonad.map(intToString, numbers);
// At this point, no transformation has occurred yet (lazy)

// Terminal operation triggers execution:
List<String> result = STREAM.narrow(strings).collect(Collectors.toList());
System.out.println(result);
// Output: [Number: 1, Number: 2, Number: 3]
```
flatMap(Function<A, Kind<StreamKind.Witness, B>> f, Kind<StreamKind.Witness, A> ma):
Applies a function f to each element of the stream within ma. The function f itself returns a StreamKind<B>. flatMap then flattens all these resulting streams into a single StreamKind<B>. Evaluation remains lazy.
StreamMonad streamMonad = StreamMonad.INSTANCE;
Kind<StreamKind.Witness, Integer> initialValues = STREAM.widen(Stream.of(1, 2, 3));
// Function that takes an integer and returns a stream of itself and itself + 10
Function<Integer, Kind<StreamKind.Witness, Integer>> replicateAndAddTen =
i -> STREAM.widen(Stream.of(i, i + 10));
Kind<StreamKind.Witness, Integer> flattenedStream = streamMonad.flatMap(replicateAndAddTen, initialValues);
// Lazy - evaluation happens at terminal operation
List<Integer> result = STREAM.narrow(flattenedStream).collect(Collectors.toList());
System.out.println(result);
// Output: [1, 11, 2, 12, 3, 13]
// Example with conditional logic
Function<Integer, Kind<StreamKind.Witness, String>> toWordsIfEven =
i -> (i % 2 == 0) ?
STREAM.widen(Stream.of("even", String.valueOf(i))) :
streamMonad.zero(); // Empty stream for odd numbers
Kind<StreamKind.Witness, String> wordStream = streamMonad.flatMap(toWordsIfEven, initialValues);
List<String> words = STREAM.narrow(wordStream).collect(Collectors.toList());
System.out.println(words);
// Output: [even, 2]
ap(Kind<StreamKind.Witness, Function<A, B>> ff, Kind<StreamKind.Witness, A> fa):
Applies a stream of functions ff to a stream of values fa. This results in a new stream where each function from ff is applied to each value in fa (Cartesian product style). Evaluation remains lazy.
StreamMonad streamMonad = StreamMonad.INSTANCE;
Function<Integer, String> addPrefix = i -> "Val: " + i;
Function<Integer, String> multiplyAndString = i -> "Mul: " + (i * 2);
Kind<StreamKind.Witness, Function<Integer, String>> functions =
STREAM.widen(Stream.of(addPrefix, multiplyAndString));
Kind<StreamKind.Witness, Integer> values = STREAM.widen(Stream.of(10, 20));
Kind<StreamKind.Witness, String> appliedResults = streamMonad.ap(functions, values);
// Lazy - collects when terminal operation is performed
List<String> result = STREAM.narrow(appliedResults).collect(Collectors.toList());
System.out.println(result);
// Output: [Val: 10, Val: 20, Mul: 20, Mul: 40]
StreamOps Utility Documentation
The StreamOps class provides a rich set of static utility methods for working with StreamKind instances. These operations complement the monadic interface with practical stream manipulation functions.
Creation Operations
// Create from varargs
Kind<StreamKind.Witness, T> fromArray(T... elements)
// Create from Iterable
Kind<StreamKind.Witness, T> fromIterable(Iterable<T> iterable)
// Create a range [start, end)
Kind<StreamKind.Witness, Integer> range(int start, int end)
// Create empty stream
Kind<StreamKind.Witness, T> empty()
Examples:
Kind<StreamKind.Witness, String> names = fromArray("Alice", "Bob", "Charlie");
Kind<StreamKind.Witness, Integer> numbers = range(1, 101); // 1 to 100
Kind<StreamKind.Witness, String> emptyStream = empty();
Filtering and Selection
// Keep only elements matching predicate
Kind<StreamKind.Witness, A> filter(Predicate<A> predicate, Kind<StreamKind.Witness, A> stream)
// Take first n elements
Kind<StreamKind.Witness, A> take(long n, Kind<StreamKind.Witness, A> stream)
// Skip first n elements
Kind<StreamKind.Witness, A> drop(long n, Kind<StreamKind.Witness, A> stream)
Examples:
Kind<StreamKind.Witness, Integer> numbers = range(1, 101);
// Get only even numbers
Kind<StreamKind.Witness, Integer> evens = filter(n -> n % 2 == 0, numbers);
// Get first 10 elements
Kind<StreamKind.Witness, Integer> first10 = take(10, range(1, 1000));
// Skip first 5 elements
Kind<StreamKind.Witness, Integer> afterFirst5 = drop(5, range(1, 20));
Combination Operations
// Concatenate two streams sequentially
Kind<StreamKind.Witness, A> concat(Kind<StreamKind.Witness, A> stream1, Kind<StreamKind.Witness, A> stream2)
// Zip two streams element-wise with combiner function
Kind<StreamKind.Witness, C> zip(Kind<StreamKind.Witness, A> stream1, Kind<StreamKind.Witness, B> stream2, BiFunction<A, B, C> combiner)
// Pair each element with its index (starting from 0)
Kind<StreamKind.Witness, Tuple2<Integer, A>> zipWithIndex(Kind<StreamKind.Witness, A> stream)
Examples:
Kind<StreamKind.Witness, Integer> first = range(1, 4); // 1, 2, 3
Kind<StreamKind.Witness, Integer> second = range(10, 13); // 10, 11, 12
// Sequential concatenation
Kind<StreamKind.Witness, Integer> combined = concat(first, second);
// Result: 1, 2, 3, 10, 11, 12
// Element-wise combination
Kind<StreamKind.Witness, String> names = fromArray("Alice", "Bob", "Charlie");
Kind<StreamKind.Witness, Integer> ages = fromArray(25, 30, 35);
Kind<StreamKind.Witness, String> profiles = zip(names, ages,
(name, age) -> name + " is " + age);
// Result: "Alice is 25", "Bob is 30", "Charlie is 35"
// Index pairing
Kind<StreamKind.Witness, String> items = fromArray("apple", "banana", "cherry");
Kind<StreamKind.Witness, Tuple2<Integer, String>> indexed = zipWithIndex(items);
// Result: (0, "apple"), (1, "banana"), (2, "cherry")
Terminal Operations
// Collect to List
List<A> toList(Kind<StreamKind.Witness, A> stream)
// Collect to Set
Set<A> toSet(Kind<StreamKind.Witness, A> stream)
// Execute side effect for each element
void forEach(Consumer<A> action, Kind<StreamKind.Witness, A> stream)
Examples:
Kind<StreamKind.Witness, Integer> numbers = range(1, 6);
// Collect to List
List<Integer> numberList = toList(numbers); // [1, 2, 3, 4, 5]
// Collect to Set (removes duplicates)
Kind<StreamKind.Witness, String> words = fromArray("a", "b", "a", "c");
Set<String> uniqueWords = toSet(words); // {"a", "b", "c"}
// Execute side effects
Kind<StreamKind.Witness, String> messages = fromArray("Hello", "World");
forEach(System.out::println, messages);
// Prints:
// Hello
// World
Side Effects and Debugging
// Execute side effect for each element while passing through
Kind<StreamKind.Witness, A> tap(Consumer<A> action, Kind<StreamKind.Witness, A> stream)
Example:
List<String> log = new ArrayList<>();
Kind<StreamKind.Witness, Integer> pipeline = tap(
n -> log.add("Processing: " + n),
StreamMonad.INSTANCE.map(n -> n * 2, range(1, 4))
);
// Side effects haven't executed yet (lazy)
System.out.println("Log size: " + log.size()); // 0
// Terminal operation triggers execution
List<Integer> result = toList(pipeline);
System.out.println("Log size: " + log.size()); // 3
System.out.println("Log: " + log); // [Processing: 2, Processing: 4, Processing: 6]
System.out.println("Result: " + result); // [2, 4, 6]
Important Constraints: Single-Use Semantics
Unlike List or Optional, Java Streams can only be consumed once. This is a fundamental characteristic of java.util.stream.Stream that is preserved in the HKT representation.
What This Means:
- Once you perform a terminal operation on a stream (including narrow() followed by collection), that stream is consumed
- Attempting to reuse a consumed stream throws IllegalStateException
- Each Kind<StreamKind.Witness, A> instance can only flow through one pipeline to completion
Correct Approach:
// Create fresh stream for each independent operation
Kind<StreamKind.Witness, Integer> stream1 = range(1, 4);
List<Integer> result1 = toList(stream1); // ✓ First use
Kind<StreamKind.Witness, Integer> stream2 = range(1, 4); // Create new stream
List<Integer> result2 = toList(stream2); // ✓ Second use with fresh stream
Incorrect Approach:
// DON'T DO THIS - Will throw IllegalStateException
Kind<StreamKind.Witness, Integer> stream = range(1, 4);
List<Integer> result1 = toList(stream); // ✓ First use
List<Integer> result2 = toList(stream); // ✗ ERROR: stream already consumed!
Design Implications:
- Don't store StreamKind instances in fields for reuse
- Create streams on-demand when needed
- Use factory methods or suppliers to generate fresh streams
- Consider using List if you need to process data multiple times
Practical Example: Complete Usage
Here's a complete example demonstrating various Stream operations:
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.stream.StreamKind;
import org.higherkindedj.hkt.stream.StreamMonad;
import static org.higherkindedj.hkt.stream.StreamKindHelper.STREAM;
import static org.higherkindedj.hkt.stream.StreamOps.*;
import java.util.List;
import java.util.function.Function;
public class StreamUsageExample {
public static void main(String[] args) {
StreamMonad streamMonad = StreamMonad.INSTANCE;
// 1. Create a StreamKind using range
Kind<StreamKind.Witness, Integer> numbersKind = range(1, 11); // 1 through 10
// 2. Use map to transform (lazy)
Function<Integer, String> numberToString = n -> "Item-" + n;
Kind<StreamKind.Witness, String> stringsKind = streamMonad.map(numberToString, numbersKind);
System.out.println("Mapped: " + toList(stringsKind));
// Expected: [Item-1, Item-2, Item-3, ..., Item-10]
// 3. Create fresh stream for flatMap example
Kind<StreamKind.Witness, Integer> numbersKind2 = range(1, 6);
// flatMap: duplicate even numbers, skip odd numbers
Function<Integer, Kind<StreamKind.Witness, Integer>> duplicateIfEven = n -> {
if (n % 2 == 0) {
return fromArray(n, n); // Duplicate even numbers
} else {
return streamMonad.zero(); // Skip odd numbers
}
};
Kind<StreamKind.Witness, Integer> flatMappedKind = streamMonad.flatMap(duplicateIfEven, numbersKind2);
System.out.println("FlatMapped: " + toList(flatMappedKind));
// Expected: [2, 2, 4, 4]
// 4. Use of to create singleton
Kind<StreamKind.Witness, String> singleValueKind = streamMonad.of("hello world");
System.out.println("From 'of': " + toList(singleValueKind));
// Expected: [hello world]
// 5. Use zero to create empty stream
Kind<StreamKind.Witness, String> emptyKind = streamMonad.zero();
System.out.println("From 'zero': " + toList(emptyKind));
// Expected: []
// 6. StreamOps: filter and take
Kind<StreamKind.Witness, Integer> largeRange = range(1, 101);
Kind<StreamKind.Witness, Integer> evensFirst10 = take(10, filter(n -> n % 2 == 0, largeRange));
System.out.println("First 10 evens: " + toList(evensFirst10));
// Expected: [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
// 7. Zip two streams
Kind<StreamKind.Witness, String> names = fromArray("Alice", "Bob", "Charlie");
Kind<StreamKind.Witness, Integer> scores = fromArray(95, 87, 92);
Kind<StreamKind.Witness, String> results = zip(names, scores,
(name, score) -> name + ": " + score);
System.out.println("Results: " + toList(results));
// Expected: [Alice: 95, Bob: 87, Charlie: 92]
// 8. Demonstrating single-use constraint
Kind<StreamKind.Witness, Integer> streamOnce = range(1, 4);
List<Integer> firstUse = toList(streamOnce);
System.out.println("First use: " + firstUse);
// Expected: [1, 2, 3]
// Must create new stream for second use
Kind<StreamKind.Witness, Integer> streamTwice = range(1, 4);
List<Integer> secondUse = toList(streamTwice);
System.out.println("Second use (new stream): " + secondUse);
// Expected: [1, 2, 3]
}
}
When to Use StreamMonad
Choose StreamMonad when:
- Processing large datasets where lazy evaluation provides memory efficiency
- Working with potentially infinite sequences
- Building complex data transformation pipelines
- You need intermediate laziness and only want to materialise results at the end
- Single-pass processing is sufficient for your use case
Choose ListMonad instead when:
- You need to process the same data multiple times
- Random access to elements is required
- The entire dataset fits comfortably in memory
- You need to store the result for later reuse
Key Difference: List is eager and reusable; Stream is lazy and single-use.
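This difference is visible with the plain JDK types themselves. The sketch below (class and method names are illustrative; it uses only java.util, no higher-kinded-j types) shows a List being traversed twice without issue, while a second terminal operation on the same Stream fails:

```java
import java.util.List;
import java.util.stream.Stream;

public class EagerVsLazyDemo {
    // Sums the list via a freshly created stream - always safe, because each
    // call builds a new Stream from the reusable, materialised List.
    static int sumOf(List<Integer> list) {
        return list.stream().mapToInt(Integer::intValue).sum();
    }

    // Returns true if a second terminal operation on the SAME stream fails.
    static boolean secondUseFails(Stream<Integer> stream) {
        stream.count(); // first terminal operation consumes the stream
        try {
            stream.count(); // second terminal operation on a consumed stream
            return false;
        } catch (IllegalStateException e) {
            return true; // "stream has already been operated upon or closed"
        }
    }

    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3);
        System.out.println(sumOf(numbers)); // 6
        System.out.println(sumOf(numbers)); // 6 - the List is reusable
        System.out.println(secondUseFails(Stream.of(1, 2, 3))); // true - the Stream is not
    }
}
```

The same rule carries over to StreamKind: each wrapped stream supports exactly one trip through a pipeline to a terminal operation.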
The Trampoline Monad: Stack-Safe Recursion in Java
What you'll learn:
- How to convert deeply recursive algorithms to stack-safe iterative ones
- Implementing mutually recursive functions without stack overflow
- Using Trampoline.done and Trampoline.defer to build trampolined computations
- Composing recursive operations using map and flatMap
- When to use Trampoline vs. traditional recursion
- Leveraging TrampolineUtils for stack-safe applicative operations
For a comprehensive exploration of recursion, thunks, and trampolines in Java and Scala, see Scott Logic's blog post: Recursion, Thunks and Trampolines with Java and Scala.
In functional programming, recursion is a natural way to express iterative algorithms. However, the JVM's call stack has a limited depth, and deeply recursive computations can cause StackOverflowError. The JVM lacks tail-call optimisation, which means even tail-recursive functions will consume stack space.
The Trampoline<A> type in higher-kinded-j solves this problem by converting recursive calls into data structures that are evaluated iteratively. Instead of making recursive calls directly (which grow the call stack), you return a Trampoline value that describes the next step of the computation. The run() method then processes these steps in a loop, using constant stack space regardless of recursion depth.
Core Components
(Diagrams: The Trampoline Structure; The HKT Bridge for Trampoline; Typeclasses for Trampoline)
The Trampoline functionality is built upon several related components:
- Trampoline<A>: The core sealed interface representing a stack-safe computation. It has three constructors:
  - Done<A>: Represents a completed computation holding a final value.
  - More<A>: Represents a suspended computation (deferred thunk) that will be evaluated later.
  - FlatMap<A, B>: Represents a sequenced computation resulting from monadic bind operations.
- TrampolineKind<A>: The HKT marker interface (Kind<TrampolineKind.Witness, A>) for Trampoline. This allows Trampoline to be treated as a generic type constructor F in type classes like Functor and Monad. The witness type is TrampolineKind.Witness.
- TrampolineKindHelper: The essential utility class for working with Trampoline in the HKT simulation. It provides:
  - widen(Trampoline<A>): Wraps a concrete Trampoline<A> instance into its HKT representation TrampolineKind<A>.
  - narrow(Kind<TrampolineKind.Witness, A>): Unwraps a TrampolineKind<A> back to the concrete Trampoline<A>. Throws KindUnwrapException if the input Kind is invalid.
  - done(A value): Creates a TrampolineKind<A> representing a completed computation.
  - defer(Supplier<Trampoline<A>> next): Creates a TrampolineKind<A> representing a deferred computation.
  - run(Kind<TrampolineKind.Witness, A>): Executes the trampoline and returns the final result.
- TrampolineFunctor: Implements Functor<TrampolineKind.Witness>. Provides the map operation to transform the result value of a trampoline computation.
- TrampolineMonad: Extends TrampolineFunctor and implements Monad<TrampolineKind.Witness>. Provides of (to lift a pure value into Trampoline) and flatMap (to sequence trampoline computations).
- TrampolineUtils: Utility class providing guaranteed stack-safe applicative operations:
  - traverseListStackSafe: Stack-safe list traversal for any applicative.
  - map2StackSafe: Stack-safe map2 for chaining many operations.
  - sequenceStackSafe: Stack-safe sequence operation.
Purpose and Usage
- Stack Safety: Converts recursive calls into data structures processed iteratively, preventing StackOverflowError on deep recursion (verified with 100,000+ iterations).
- Tail Call Optimisation: Effectively provides tail-call optimisation for Java, which lacks native support for it.
- Lazy Evaluation: Computations are not executed until run() is explicitly called.
- Composability: Trampolined computations can be chained using map and flatMap.
Key Methods:
- Trampoline.done(value): Creates a completed computation with a final value.
- Trampoline.defer(supplier): Defers a computation by wrapping it in a supplier.
- trampoline.run(): Executes the trampoline iteratively and returns the final result.
- trampoline.map(f): Transforms the result without executing the trampoline.
- trampoline.flatMap(f): Sequences trampolines whilst maintaining stack safety.
The classic factorial function is a simple example of recursion. For large numbers, naive recursion will cause stack overflow:
import org.higherkindedj.hkt.trampoline.Trampoline;
import java.math.BigInteger;
public class FactorialExample {
// Naive recursive factorial - WILL OVERFLOW for large n
static BigInteger factorialNaive(BigInteger n) {
if (n.compareTo(BigInteger.ZERO) <= 0) {
return BigInteger.ONE;
}
return n.multiply(factorialNaive(n.subtract(BigInteger.ONE)));
}
// Stack-safe trampolined factorial - safe for very large n
static Trampoline<BigInteger> factorial(BigInteger n, BigInteger acc) {
if (n.compareTo(BigInteger.ZERO) <= 0) {
return Trampoline.done(acc);
}
// Instead of recursive call, return a deferred computation
return Trampoline.defer(() ->
factorial(n.subtract(BigInteger.ONE), n.multiply(acc))
);
}
public static void main(String[] args) {
// This would overflow: factorialNaive(BigInteger.valueOf(10000));
// This is stack-safe
BigInteger result = factorial(
BigInteger.valueOf(10000),
BigInteger.ONE
).run();
System.out.println("Factorial computed safely!");
System.out.println("Result has " + result.toString().length() + " digits");
}
}
Key Insight: Instead of making a direct recursive call (which pushes a new frame onto the call stack), we return Trampoline.defer(() -> ...) which creates a data structure. The run() method then evaluates these structures iteratively.
Mutually recursive functions are another classic case where stack overflow occurs easily:
import org.higherkindedj.hkt.trampoline.Trampoline;
public class MutualRecursionExample {
// Naive mutual recursion - WILL OVERFLOW for large n
static boolean isEvenNaive(int n) {
if (n == 0) return true;
return isOddNaive(n - 1);
}
static boolean isOddNaive(int n) {
if (n == 0) return false;
return isEvenNaive(n - 1);
}
// Stack-safe trampolined versions
static Trampoline<Boolean> isEven(int n) {
if (n == 0) return Trampoline.done(true);
return Trampoline.defer(() -> isOdd(n - 1));
}
static Trampoline<Boolean> isOdd(int n) {
if (n == 0) return Trampoline.done(false);
return Trampoline.defer(() -> isEven(n - 1));
}
public static void main(String[] args) {
// This would overflow: isEvenNaive(1000000);
// This is stack-safe
boolean result = isEven(1000000).run();
System.out.println("1000000 is even: " + result); // true
boolean result2 = isOdd(999999).run();
System.out.println("999999 is odd: " + result2); // true
}
}
Computing Fibonacci numbers recursively is inefficient and stack-unsafe. With trampolining, we achieve stack safety (though we'd still want memoisation for efficiency):
import org.higherkindedj.hkt.trampoline.Trampoline;
import java.math.BigInteger;
public class FibonacciExample {
// Stack-safe Fibonacci using tail recursion with accumulator
static Trampoline<BigInteger> fibonacci(int n, BigInteger a, BigInteger b) {
if (n == 0) return Trampoline.done(a);
if (n == 1) return Trampoline.done(b);
return Trampoline.defer(() ->
fibonacci(n - 1, b, a.add(b))
);
}
public static void main(String[] args) {
// Compute the 10,000th Fibonacci number - stack-safe!
BigInteger fib10000 = fibonacci(
10000,
BigInteger.ZERO,
BigInteger.ONE
).run();
System.out.println("Fibonacci(10000) has " +
fib10000.toString().length() + " digits");
}
}
Trampoline is a monad, so you can compose computations using map and flatMap:
import org.higherkindedj.hkt.trampoline.Trampoline;
public class TrampolineCompositionExample {
static Trampoline<Integer> countDown(int n) {
if (n <= 0) return Trampoline.done(0);
return Trampoline.defer(() -> countDown(n - 1));
}
public static void main(String[] args) {
// Use map to transform the result
Trampoline<String> countWithMessage = countDown(100000)
.map(result -> "Countdown complete! Final: " + result);
System.out.println(countWithMessage.run());
// Use flatMap to sequence trampolines
Trampoline<Integer> sequenced = countDown(50000)
.flatMap(first -> countDown(50000)
.map(second -> first + second));
System.out.println("Sequenced result: " + sequenced.run());
}
}
When traversing large collections with custom applicatives, use TrampolineUtils for guaranteed stack safety:
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.trampoline.TrampolineUtils;
import org.higherkindedj.hkt.id.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
public class TrampolineUtilsExample {
public static void main(String[] args) {
// Create a large list
List<Integer> largeList = IntStream.range(0, 100000)
.boxed()
.collect(Collectors.toList());
// Traverse it safely
Kind<IdKind.Witness, List<String>> result =
TrampolineUtils.traverseListStackSafe(
largeList,
i -> Id.of("item-" + i),
IdMonad.instance()
);
List<String> unwrapped = IdKindHelper.ID.narrow(result).value();
System.out.println("Traversed " + unwrapped.size() + " elements safely");
}
}
See TrampolineUtils documentation for more details on stack-safe applicative operations.
When to Use Trampoline
Use Trampoline when:
- Deep Recursion: Processing data structures or algorithms that recurse deeply (>1,000 levels).
- Tail Recursion: Converting tail-recursive algorithms that would otherwise overflow.
- Mutual Recursion: Implementing mutually recursive functions.
- Stack Safety Guarantee: When you absolutely must prevent StackOverflowError.
- Large Collections: When using TrampolineUtils to traverse large collections (>10,000 elements) with custom applicatives.
Avoid Trampoline when:
- Shallow Recursion: For recursion depth <1,000, the overhead isn't justified.
- Performance Critical: Trampoline adds overhead compared to direct recursion or iteration.
- Simple Iteration: If you can write a simple loop, that's usually clearer and faster.
- Standard Collections: For standard applicatives (Id, Optional, Either, etc.) on moderate-sized lists (<10,000 elements), regular traverse is sufficient.
Performance Characteristics
- Stack Space: O(1) - constant stack space regardless of recursion depth
- Heap Space: O(n) - creates data structures for deferred computations
- Time Overhead: Small constant overhead per recursive step compared to direct recursion
- Throughput: Slower than native tail-call optimisation (if it existed in Java) but faster than stack overflow recovery
Benchmarks: The implementation has been verified to handle:
- 100,000+ iterations in factorial computations
- 1,000,000+ iterations in mutual recursion (isEven/isOdd)
- 100,000+ element list traversals (via TrampolineUtils)
Implementation Notes
The run() method uses an iterative algorithm with an explicit continuation stack (implemented with ArrayDeque) to process the trampoline structure. This algorithm:
- Starts with the current trampoline
- If it's More, unwraps it and continues
- If it's FlatMap, pushes the function onto the stack and processes the sub-computation
- If it's Done, applies any pending continuations from the stack
- Repeats until there are no more continuations and we have a final Done value
This design ensures that regardless of how deeply nested the recursive calls were in the original algorithm, the execution happens in constant stack space.
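The core of this evaluation strategy fits in a few lines. The following is a deliberately simplified, self-contained sketch (only Done and More, no FlatMap or continuation stack; the names are illustrative, not the library's actual types) showing how a while loop replaces the call stack:

```java
import java.util.function.Supplier;

public class MiniTrampoline {
    // Simplified sketch: Done holds a final value, More holds a deferred step.
    sealed interface Step<A> permits Done, More {}
    record Done<A>(A value) implements Step<A> {}
    record More<A>(Supplier<Step<A>> next) implements Step<A> {}

    // The "run loop": unwrap one layer at a time in a plain while loop,
    // so stack depth stays constant no matter how deep the recursion goes.
    static <A> A run(Step<A> step) {
        while (step instanceof More<A> more) {
            step = more.next().get();
        }
        return ((Done<A>) step).value();
    }

    // A tail-recursive sum written against the sketch: each "recursive call"
    // just returns a More wrapping the next step instead of calling itself.
    static Step<Long> sum(long n, long acc) {
        return n == 0 ? new Done<>(acc) : new More<>(() -> sum(n - 1, acc + n));
    }

    public static void main(String[] args) {
        // One million deferred steps - would overflow as direct recursion.
        System.out.println(run(sum(1_000_000, 0))); // 500000500000
    }
}
```

The real implementation additionally handles FlatMap by pushing continuations onto an explicit ArrayDeque, as described above, but the constant-stack loop is the same idea.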
Type Safety Considerations
The implementation uses a Continuation wrapper to safely handle heterogeneous types on the continuation stack. This design confines the necessary unsafe cast to a single, controlled location in the code, making the type erasure explicit, documented, and verified to be safe.
Summary
The Trampoline monad provides a practical solution to Java's lack of tail-call optimisation. By converting recursive algorithms into trampolined form, you can:
- Write naturally recursive code that's guaranteed stack-safe
- Compose recursive computations functionally using map and flatMap
- Leverage TrampolineUtils for stack-safe applicative operations on large collections
- Maintain clarity and correctness whilst preventing StackOverflowError
For detailed implementation examples and more advanced use cases, see the TrampolineExample.java in the examples module.
The Free Monad: Building Composable DSLs and Interpreters
What you'll learn:
- How to build domain-specific languages (DSLs) as data structures
- Separating program description from execution
- Creating multiple interpreters for the same program
- Using pure, suspend, and liftF to construct Free programs
- Implementing stack-safe interpreters with foldMap
- When Free monads solve real architectural problems
- Comparing Free monads with traditional Java patterns
Example code:
- ConsoleProgram.java
- FreeMonadTest.java
- FreeFactoryTest.java - Demonstrates improved type inference with FreeFactory
For deeper exploration of Free monads and their applications:
- Gabriel Gonzalez: Why free monads matter - An intuitive introduction to the concept
- Runar Bjarnason: Stackless Scala With Free Monads - Stack-safe execution patterns
- Cats Documentation: Free Monad - Scala implementation and examples
- John A De Goes: Modern Functional Programming (Part 2) - Practical applications in real systems
Purpose
In traditional Java programming, when you want to execute side effects (like printing to the console, reading files, or making database queries), you directly execute them:
// Traditional imperative approach
System.out.println("What is your name?");
String name = scanner.nextLine();
System.out.println("Hello, " + name + "!");
This approach tightly couples what you want to do with how it's done. The Free monad provides a fundamentally different approach: instead of executing effects immediately, you build programs as data structures that can be interpreted in different ways.
Think of it like writing a recipe (the data structure) versus actually cooking the meal (the execution). The recipe can be:
- Executed in a real kitchen (production)
- Simulated for testing
- Optimised before cooking
- Translated to different cuisines
The Free monad enables this separation in functional programming. A Free<F, A> represents a program built from instructions of type F that, when interpreted, will produce a value of type A.
Key Benefits
- Testability: Write pure tests without actual side effects. Test database code without a database.
- Multiple Interpretations: One program, many interpreters (production, testing, logging, optimisation).
- Composability: Build complex programs from simple, reusable building blocks.
- Inspection: Programs are data, so you can analyse, optimise, or transform them before execution.
- Stack Safety: Interpretation uses constant stack space, preventing StackOverflowError.
Comparison with Traditional Java Patterns
If you're familiar with the Strategy pattern, Free monads extend this concept:
Strategy Pattern: Choose algorithm at runtime
interface PaymentStrategy {
void pay(int amount);
}
// Pick one: creditCardStrategy, payPalStrategy, etc.
Free Monad: Build an entire program as data, then pick how to execute it
Free<PaymentOp, Receipt> program = ...;
// Pick interpreter: realPayment, testPayment, loggingPayment, etc.
Similarly, the Command pattern encapsulates actions as objects:
Command Pattern: Single action as object
interface Command {
void execute();
}
Free Monad: Entire workflows with sequencing, branching, and composition
Free<Command, Result> workflow =
sendEmail(...)
.flatMap(receipt -> saveToDatabase(...))
.flatMap(id -> sendNotification(...));
// Interpret with real execution or test mock
Core Components
(Diagrams: The Free Structure; The HKT Bridge for Free; Type Classes for Free)
The Free functionality is built upon several related components:
- Free<F, A>: The core sealed interface representing a program. It has three constructors:
  - Pure<F, A>: Represents a terminal value, the final result.
  - Suspend<F, A>: Represents a suspended computation, an instruction Kind<F, Free<F, A>> to be interpreted later.
  - FlatMapped<F, X, A>: Represents monadic sequencing, chaining computations together in a stack-safe manner.
- FreeKind<F, A>: The HKT marker interface (Kind<FreeKind.Witness<F>, A>) for Free. This allows Free to be treated as a generic type constructor in type classes. The witness type is FreeKind.Witness<F>, where F is the instruction set functor.
- FreeKindHelper: The essential utility class for working with Free in the HKT simulation. It provides:
  - widen(Free<F, A>): Wraps a concrete Free<F, A> instance into its HKT representation.
  - narrow(Kind<FreeKind.Witness<F>, A>): Unwraps a FreeKind<F, A> back to the concrete Free<F, A>.
- FreeFunctor<F>: Implements Functor<FreeKind.Witness<F>>. Provides the map operation to transform result values.
- FreeMonad<F>: Extends FreeFunctor<F> and implements Monad<FreeKind.Witness<F>>. Provides of (to lift a pure value) and flatMap (to sequence Free computations).
Purpose and Usage
- Building DSLs: Create domain-specific languages as composable data structures.
- Natural Transformations: Write interpreters as transformations from your instruction set F to a target monad M.
- Stack-Safe Execution: The foldMap method uses Higher-Kinded-J's own Trampoline monad internally, demonstrating the library's composability whilst preventing stack overflow.
- Multiple Interpreters: Execute the same program with different interpreters (production vs. testing vs. logging).
- Programme Inspection: Since programs are data, you can analyse, optimise, or transform them before execution.
Key Methods:
- Free.pure(value): Creates a terminal computation holding a final value.
- Free.suspend(computation): Suspends a computation for later interpretation.
- Free.liftF(fa, functor): Lifts a functor value into a Free monad.
- free.map(f): Transforms the result value without executing.
- free.flatMap(f): Sequences Free computations whilst maintaining stack safety.
- free.foldMap(transform, monad): Interprets the Free program using a natural transformation.
FreeFactory for Improved Type Inference:
Java's type inference can struggle when chaining operations directly on Free.pure():
// This fails to compile - Java can't infer F
Free<IdKind.Witness, Integer> result = Free.pure(2).map(x -> x * 2); // ERROR
// Workaround: explicit type parameters (verbose)
Free<IdKind.Witness, Integer> result = Free.<IdKind.Witness, Integer>pure(2).map(x -> x * 2);
The FreeFactory<F> class solves this by capturing the functor type parameter once:
// Create a factory with your functor type
FreeFactory<IdKind.Witness> FREE = FreeFactory.of();
// or with a monad instance for clarity:
FreeFactory<IdKind.Witness> FREE = FreeFactory.withMonad(IdMonad.instance());
// Now type inference works perfectly
Free<IdKind.Witness, Integer> result = FREE.pure(2).map(x -> x * 2); // Works!
// Chain operations fluently
Free<IdKind.Witness, Integer> program = FREE.pure(10)
.map(x -> x + 1)
.flatMap(x -> FREE.pure(x * 2))
.map(x -> x - 5);
// Other factory methods
Free<F, A> pure = FREE.pure(value);
Free<F, A> suspended = FREE.suspend(computation);
Free<F, A> lifted = FREE.liftF(fa, functor);
FreeFactory is particularly useful in:
- Test code where you build many Free programmes
- DSL implementations where type inference is important
- Any code that chains map/flatMap operations on Free.pure()
Let's build a simple DSL for console interactions. We'll define instructions, build programs, and create multiple interpreters.
Step 1: Define Your Instruction Set
First, create a sealed interface representing all possible operations in your DSL:
public sealed interface ConsoleOp<A> {
record PrintLine(String text) implements ConsoleOp<Unit> {}
record ReadLine() implements ConsoleOp<String> {}
}
public record Unit() {
public static final Unit INSTANCE = new Unit();
}
This is your vocabulary. PrintLine returns Unit (like void), ReadLine returns String.
Step 2: Create HKT Bridge for Your DSL
To use your DSL with the Free monad, you need the HKT simulation components:
public interface ConsoleOpKind<A> extends Kind<ConsoleOpKind.Witness, A> {
final class Witness {
private Witness() {}
}
}
public enum ConsoleOpKindHelper {
CONSOLE;
record ConsoleOpHolder<A>(ConsoleOp<A> op) implements ConsoleOpKind<A> {}
public <A> Kind<ConsoleOpKind.Witness, A> widen(ConsoleOp<A> op) {
return new ConsoleOpHolder<>(op);
}
public <A> ConsoleOp<A> narrow(Kind<ConsoleOpKind.Witness, A> kind) {
return ((ConsoleOpHolder<A>) kind).op();
}
}
Step 3: Create a Functor for Your DSL
The Free monad requires a Functor for your instruction set:
public class ConsoleOpFunctor implements Functor<ConsoleOpKind.Witness> {
@Override
@SuppressWarnings("unchecked")
public <A, B> Kind<ConsoleOpKind.Witness, B> map(
Function<? super A, ? extends B> f,
Kind<ConsoleOpKind.Witness, A> fa) {
// The instructions are immutable payloads, so mapping here is the identity;
// the actual transformation is applied during interpretation.
return (Kind<ConsoleOpKind.Witness, B>) fa;
}
}
Step 4: Create DSL Helper Functions
Provide convenient methods for building Free programs:
public class ConsoleOps {
/** Prints a line to the console. */
public static Free<ConsoleOpKind.Witness, Unit> printLine(String text) {
ConsoleOp<Unit> op = new ConsoleOp.PrintLine(text);
Kind<ConsoleOpKind.Witness, Unit> kindOp =
ConsoleOpKindHelper.CONSOLE.widen(op);
return Free.liftF(kindOp, new ConsoleOpFunctor());
}
/** Reads a line from the console. */
public static Free<ConsoleOpKind.Witness, String> readLine() {
ConsoleOp<String> op = new ConsoleOp.ReadLine();
Kind<ConsoleOpKind.Witness, String> kindOp =
ConsoleOpKindHelper.CONSOLE.widen(op);
return Free.liftF(kindOp, new ConsoleOpFunctor());
}
/** Pure value in the Free monad. */
public static <A> Free<ConsoleOpKind.Witness, A> pure(A value) {
return Free.pure(value);
}
}
Now you can build programs using familiar Java syntax:
Free<ConsoleOpKind.Witness, Unit> program =
ConsoleOps.printLine("What is your name?")
.flatMap(ignored ->
ConsoleOps.readLine()
.flatMap(name ->
ConsoleOps.printLine("Hello, " + name + "!")));
Key Insight: At this point, nothing has executed. You've built a data structure describing what should happen.
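To make "programmes are just data" concrete, here is a deliberately minimal, self-contained sketch. The ConsoleFree type below is hypothetical and far simpler than the library's general Free&lt;F, A&gt;; it hard-codes the console instruction set so the whole idea fits in one file:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Function;

public class FreeAsDataDemo {
  // A Free-style programme specialised to a console DSL (illustrative types).
  sealed interface ConsoleFree<A> {
    record Pure<A>(A value) implements ConsoleFree<A> {}
    record PrintLine<A>(String text, ConsoleFree<A> next) implements ConsoleFree<A> {}
    record ReadLine<A>(Function<String, ConsoleFree<A>> next) implements ConsoleFree<A> {}

    default <B> ConsoleFree<B> flatMap(Function<A, ConsoleFree<B>> f) {
      return switch (this) {
        case Pure<A> p -> f.apply(p.value());
        case PrintLine<A> pl -> new PrintLine<B>(pl.text(), pl.next().flatMap(f));
        case ReadLine<A> rl -> new ReadLine<B>(s -> rl.next().apply(s).flatMap(f));
      };
    }
  }

  static ConsoleFree<Void> printLine(String text) {
    return new ConsoleFree.PrintLine<>(text, new ConsoleFree.Pure<>(null));
  }

  static ConsoleFree<String> readLine() {
    return new ConsoleFree.ReadLine<>(ConsoleFree.Pure::new);
  }

  // Building this executes nothing: it is a tree of instructions.
  static ConsoleFree<Void> greeting() {
    return printLine("What is your name?")
        .flatMap(ignored -> readLine()
            .flatMap(name -> printLine("Hello, " + name + "!")));
  }

  // A pure interpreter: scripted input in, captured output out, no real I/O.
  static List<String> runCollecting(ConsoleFree<?> program, Deque<String> input) {
    List<String> output = new ArrayList<>();
    ConsoleFree<?> current = program;
    while (true) {
      switch (current) {
        case ConsoleFree.Pure<?> p -> { return output; }
        case ConsoleFree.PrintLine<?> pl -> { output.add(pl.text()); current = pl.next(); }
        case ConsoleFree.ReadLine<?> rl -> current = rl.next().apply(input.poll());
      }
    }
  }

  public static void main(String[] args) {
    List<String> output = runCollecting(greeting(), new ArrayDeque<>(List.of("Alice")));
    assert output.equals(List.of("What is your name?", "Hello, Alice!"));
    System.out.println(output);
  }
}
```

The library's Free generalises this over any functor F, but the essence is the same: flatMap builds a bigger tree, and an interpreter walks it.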
The Free monad supports map and flatMap, making it easy to compose programs:
import static org.higherkindedj.example.free.ConsoleProgram.ConsoleOps.*;
// Simple sequence
Free<ConsoleOpKind.Witness, String> getName =
printLine("Enter your name:")
.flatMap(ignored -> readLine());
// Using map to transform results
Free<ConsoleOpKind.Witness, String> getUpperName =
getName.map(String::toUpperCase);
// Building complex workflows
Free<ConsoleOpKind.Witness, Unit> greetingWorkflow =
printLine("Welcome to the application!")
.flatMap(ignored -> getName)
.flatMap(name -> printLine("Hello, " + name + "!"))
.flatMap(ignored -> printLine("Have a great day!"));
// Calculator example with error handling
Free<ConsoleOpKind.Witness, Unit> calculator =
printLine("Enter first number:")
.flatMap(ignored1 -> readLine())
.flatMap(num1 ->
printLine("Enter second number:")
.flatMap(ignored2 -> readLine())
.flatMap(num2 -> {
try {
int sum = Integer.parseInt(num1) + Integer.parseInt(num2);
return printLine("Sum: " + sum);
} catch (NumberFormatException e) {
return printLine("Invalid numbers!");
}
}));
Composability: Notice how we can build getName once and reuse it in multiple programmes. This promotes code reuse and testability.
Now let's create an interpreter that actually executes console operations:
public class IOInterpreter {
private final Scanner scanner = new Scanner(System.in);
public <A> A run(Free<ConsoleOpKind.Witness, A> program) {
// Create a natural transformation from ConsoleOp to IO
Function<Kind<ConsoleOpKind.Witness, ?>, Kind<IOKind.Witness, ?>> transform =
kind -> {
ConsoleOp<?> op = ConsoleOpKindHelper.CONSOLE.narrow(
(Kind<ConsoleOpKind.Witness, Object>) kind);
// Execute the instruction and wrap result in Free.pure
Free<ConsoleOpKind.Witness, ?> freeResult = switch (op) {
case ConsoleOp.PrintLine print -> {
System.out.println(print.text());
yield Free.pure(Unit.INSTANCE);
}
case ConsoleOp.ReadLine read -> {
String line = scanner.nextLine();
yield Free.pure(line);
}
};
// Wrap the Free result in the target monad (IO)
return IOKindHelper.IO.widen(new IO<>(freeResult));
};
// Interpret the program using foldMap
Kind<IOKind.Witness, A> result = program.foldMap(transform, new IOMonad());
return IOKindHelper.IO.narrow(result).value();
}
}
// Simple, eager IO wrapper for this example (a real IO would defer evaluation)
record IO<A>(A value) {}
// Run the program
IOInterpreter interpreter = new IOInterpreter();
interpreter.run(greetingProgram());
// Actual console interaction happens here!
Natural Transformation: The transform function is a natural transformation—it converts each ConsoleOp instruction into an IO operation whilst preserving structure.
Critical Detail: Notice we wrap instruction results in Free.pure(). This is essential—the natural transformation receives Kind<F, Free<F, A>> and must return Kind<M, Free<F, A>>, not just the raw result.
One of the most powerful aspects of Free monads is testability. Create a test interpreter that doesn't perform real I/O:
public class TestInterpreter {
private final List<String> input;
private final List<String> output = new ArrayList<>();
private int inputIndex = 0;
public TestInterpreter(List<String> input) {
this.input = input;
}
public <A> A run(Free<ConsoleOpKind.Witness, A> program) {
// Create natural transformation to TestResult
Function<Kind<ConsoleOpKind.Witness, ?>, Kind<TestResultKind.Witness, ?>> transform =
kind -> {
ConsoleOp<?> op = ConsoleOpKindHelper.CONSOLE.narrow(
(Kind<ConsoleOpKind.Witness, Object>) kind);
// Simulate the instruction
Free<ConsoleOpKind.Witness, ?> freeResult = switch (op) {
case ConsoleOp.PrintLine print -> {
output.add(print.text());
yield Free.pure(Unit.INSTANCE);
}
case ConsoleOp.ReadLine read -> {
String line = inputIndex < input.size()
? input.get(inputIndex++)
: "";
yield Free.pure(line);
}
};
return TestResultKindHelper.TEST.widen(new TestResult<>(freeResult));
};
Kind<TestResultKind.Witness, A> result =
program.foldMap(transform, new TestResultMonad());
return TestResultKindHelper.TEST.narrow(result).value();
}
public List<String> getOutput() {
return output;
}
}
// Pure test - no actual I/O!
@Test
void testGreetingProgram() {
TestInterpreter interpreter = new TestInterpreter(List.of("Alice"));
interpreter.run(Programs.greetingProgram());
List<String> output = interpreter.getOutput();
assertEquals(2, output.size());
assertEquals("What is your name?", output.get(0));
assertEquals("Hello, Alice!", output.get(1));
}
Testability: The same greetingProgram() can be tested without any actual console I/O. You control inputs and verify outputs deterministically.
The real power emerges when building complex programmes from simple, reusable pieces:
// Reusable building blocks
Free<ConsoleOpKind.Witness, String> askQuestion(String question) {
return printLine(question)
.flatMap(ignored -> readLine());
}
Free<ConsoleOpKind.Witness, Unit> confirmAction(String action) {
return printLine(action + " - Are you sure? (yes/no)")
.flatMap(ignored -> readLine())
.flatMap(response ->
response.equalsIgnoreCase("yes")
? printLine("Confirmed!")
: printLine("Cancelled."));
}
// Composed programme
Free<ConsoleOpKind.Witness, Unit> userRegistration() {
return askQuestion("Enter username:")
.flatMap(username ->
askQuestion("Enter email:")
.flatMap(email ->
confirmAction("Register user " + username)
.flatMap(ignored ->
printLine("Registration complete for " + username))));
}
// Even more complex composition
Free<ConsoleOpKind.Witness, List<String>> gatherMultipleInputs(int count) {
Free<ConsoleOpKind.Witness, List<String>> start = Free.pure(new ArrayList<>());
for (int i = 0; i < count; i++) {
final int index = i;
start = start.flatMap(list ->
askQuestion("Enter item " + (index + 1) + ":")
.map(item -> {
list.add(item);
return list;
}));
}
return start;
}
Modularity: Each function returns a Free programme that can be:
- Tested independently
- Composed with others
- Interpreted in different ways
- Reused across your application
The liftF method provides a convenient way to lift single functor operations into Free:
// Instead of manually creating Suspend
Free<ConsoleOpKind.Witness, String> createManualReadLine() {
ConsoleOp<String> op = new ConsoleOp.ReadLine();
Kind<ConsoleOpKind.Witness, String> kindOp =
ConsoleOpKindHelper.CONSOLE.widen(op);
return Free.suspend(
new ConsoleOpFunctor().map(Free::pure, kindOp)
);
}
// Using liftF (simpler!)
Free<ConsoleOpKind.Witness, String> createLiftedReadLine() {
ConsoleOp<String> op = new ConsoleOp.ReadLine();
Kind<ConsoleOpKind.Witness, String> kindOp =
ConsoleOpKindHelper.CONSOLE.widen(op);
return Free.liftF(kindOp, new ConsoleOpFunctor());
}
// Even simpler with helper method
Free<ConsoleOpKind.Witness, String> simpleReadLine =
ConsoleOps.readLine();
Best Practice: Create helper methods (like ConsoleOps.readLine()) that use liftF internally. This provides a clean API for building programmes.
When to Use Free Monad
Use Free Monad When:
- Building DSLs: You need a domain-specific language for your problem domain (financial calculations, workflow orchestration, build systems, etc.).
- Multiple Interpretations: The same logic needs different execution modes:
  - Production (real database, real network)
  - Testing (mocked, pure)
  - Logging (record all operations)
  - Optimisation (analyse before execution)
  - Dry-run (validate without executing)
- Testability is Critical: You need to test complex logic without actual side effects. Example: testing database transactions without a database.
- Programme Analysis: You need to inspect, optimise, or transform programmes before execution:
  - Query optimisation
  - Batch operations
  - Caching strategies
  - Cost analysis
- Separation of Concerns: Business logic must be decoupled from execution details. Example: workflow definition separate from workflow engine.
- Stack Safety Required: Your DSL involves deep recursion or many sequential operations (verified with 10,000+ operations).
Avoid Free Monad When:
- Simple Effects: For straightforward side effects, use IO, Reader, or State directly. Free adds unnecessary complexity.
- Performance Critical: Free monads have overhead:
  - Heap allocation for the programme structure
  - Interpretation overhead
  - Not suitable for hot paths or tight loops
- Single Interpretation: If you only ever need one way to execute your programme, traditional imperative code or simpler monads are clearer.
- Team Unfamiliarity: Free monads require understanding of algebraic data types, natural transformations, and monadic composition. If your team isn't comfortable with these concepts, simpler patterns might be more maintainable.
- Small Scale: For small scripts or simple applications, the architectural benefits don't justify the complexity.
Comparison with Alternatives
Free Monad vs. Direct Effects:
- Free: Testable, multiple interpreters, programme inspection
- Direct: Simpler, better performance, easier to understand
Free Monad vs. Tagless Final:
- Free: Programmes are data structures, can be inspected
- Tagless Final: Better performance, less boilerplate, but programmes aren't inspectable
Free Monad vs. Effect Systems (like ZIO/Cats Effect):
- Free: Simpler concept, custom DSLs
- Effect Systems: More powerful, better performance, ecosystem support
Advanced Topics
Free Applicative vs. Free Monad
The Free Applicative is a related but distinct structure:
// Free Monad: Sequential, dependent operations
Free<F, C> sequential =
operationA() // A
.flatMap(a -> // depends on A
operationB(a) // B
.flatMap(b -> // depends on B
operationC(a, b))); // C
// Free Applicative: independent, parallel operations (illustrative pseudocode)
FreeApplicative<F, C> parallel =
map3(
operationA(), // A (independent)
operationB(), // B (independent)
operationC(), // C (independent)
(a, b, c) -> combine(a, b, c)
);
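The distinction can also be seen with plain Optional: flatMap forces sequencing because each step can inspect the previous result, whilst an applicative-style map3 helper (sketched here; not a JDK method) combines values that never depend on one another:

```java
import java.util.Optional;

public class ApplicativeVsMonadDemo {
  interface TriFunction<A, B, C, D> { D apply(A a, B b, C c); }

  // Applicative-style combinator: the three values never depend on each other,
  // so an optimising interpreter would be free to reorder or batch them.
  // (For Optional we implement it with flatMap internally; the point is that
  // the caller's three arguments remain independent.)
  static <A, B, C, D> Optional<D> map3(
      Optional<A> oa, Optional<B> ob, Optional<C> oc, TriFunction<A, B, C, D> f) {
    return oa.flatMap(a -> ob.flatMap(b -> oc.map(c -> f.apply(a, b, c))));
  }

  public static void main(String[] args) {
    // Monadic: each step inspects the previous result, forcing sequencing.
    Optional<Integer> sequential =
        Optional.of(10)
            .flatMap(a -> Optional.of(a + 1)
                .flatMap(b -> Optional.of(a + b)));

    // Applicative: three independent values combined at the end.
    Optional<Integer> independent =
        map3(Optional.of(1), Optional.of(2), Optional.of(3), (a, b, c) -> a + b + c);

    assert sequential.equals(Optional.of(21));
    assert independent.equals(Optional.of(6));
    System.out.println(sequential + " " + independent);
  }
}
```

With a Free Applicative, that independence is visible in the programme structure itself, which is what allows an interpreter to batch or parallelise.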
When to use Free Applicative:
- Operations are independent and can run in parallel
- You want to analyse all operations upfront (batch database queries, parallel API calls)
- Optimisation: Can reorder, batch, or parallelise operations
When to use Free Monad:
- Operations are dependent on previous results
- Need full monadic sequencing power
- Building workflows with conditional logic
Example: Fetching data from multiple independent sources
// Free Applicative can batch these into a single round-trip (illustrative pseudocode)
FreeApplicative<DatabaseQuery, Report> report =
map3(
fetchUsers(), // Independent
fetchOrders(), // Independent
fetchProducts(), // Independent
(users, orders, products) -> generateReport(users, orders, products)
);
// Interpreter can optimise: "SELECT * FROM users, orders, products"
Coyoneda Optimisation
The Coyoneda construction (from the Coyoneda lemma) equips any type constructor with a lawful Functor for free. This allows Free monads to work with non-functor instruction sets:
// Without Coyoneda: instruction set must be a Functor
public sealed interface DatabaseOp<A> {
record Query(String sql) implements DatabaseOp<ResultSet> {}
record Update(String sql) implements DatabaseOp<Integer> {}
}
// Must implement Functor<DatabaseOp> - can be tedious!
// With Coyoneda: automatic functor lifting
class Coyoneda<F, A> {
final Kind<F, Object> fa;
final Function<Object, A> f;
Coyoneda(Kind<F, Object> fa, Function<Object, A> f) { this.fa = fa; this.f = f; }
@SuppressWarnings("unchecked")
static <F, A> Coyoneda<F, A> lift(Kind<F, A> fa) {
return new Coyoneda<>((Kind<F, Object>) fa, x -> (A) x);
}
// map just composes the pending function: no Functor<F> instance needed
<B> Coyoneda<F, B> map(Function<A, B> g) {
return new Coyoneda<>(fa, f.andThen(g));
}
}
// Now you can use any F without writing a Functor instance!
Free<Coyoneda<DatabaseOp, ?>, Result> programme = ...;
Benefits:
- Less boilerplate (no manual Functor implementation)
- Works with any instruction set
- Trade-off: Slightly more complex interpretation
When to use: Large DSLs where writing Functor instances for every instruction type is burdensome.
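Here is a runnable, self-contained Coyoneda sketch over a deliberately functor-less Box container (illustrative names, not the library's API), showing that map costs only a function composition until the value is finally "lowered":

```java
import java.util.function.Function;

public class CoyonedaDemo {
  // A deliberately functor-less container: it just holds a value.
  record Box<A>(A value) {}

  // Coyoneda over Box: an un-mapped value paired with a deferred function,
  // so map needs no Functor<Box> instance at all.
  record CoBox<A>(Box<Object> fa, Function<Object, A> f) {
    @SuppressWarnings("unchecked")
    static <A> CoBox<A> lift(Box<A> box) {
      return new CoBox<>(new Box<>(box.value()), x -> (A) x);
    }

    // map only composes the pending function; nothing is traversed yet.
    <B> CoBox<B> map(Function<A, B> g) {
      return new CoBox<>(fa, f.andThen(g));
    }

    // "Lower" back to Box by finally applying the composed function once.
    Box<A> lower() {
      return new Box<>(f.apply(fa.value()));
    }
  }

  public static void main(String[] args) {
    Box<Integer> lowered = CoBox.lift(new Box<>(10))
        .map(x -> x + 1)
        .map(x -> x * 2)
        .lower();
    assert lowered.value() == 22;
    System.out.println(lowered.value());
  }
}
```

A side benefit visible here: consecutive maps are automatically fused into one composed function, which is exactly the fusion optimisation discussed later.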
Tagless Final Style (Alternative Approach)
An alternative to Free monads is the Tagless Final encoding:
// Free Monad approach
sealed interface ConsoleOp<A> { ... }
Free<ConsoleOp, Result> programme = ...;
// Tagless Final approach
interface Console<F> {
Kind<F, Unit> printLine(String text);
Kind<F, String> readLine();
}
<F> Kind<F, Unit> programme(Console<F> console, Monad<F> monad) {
Kind<F, Unit> printName = console.printLine("What is your name?");
Kind<F, String> readName = monad.flatMap(ignored -> console.readLine(), printName);
return monad.flatMap(name -> console.printLine("Hello, " + name + "!"), readName);
}
// Different interpreters
Kind<IO.Witness, Unit> prod = programme(ioConsole, ioMonad);
Kind<Test.Witness, Unit> test = programme(testConsole, testMonad);
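Stripped of the HKT machinery, the essence of Tagless Final is writing programmes against a capability interface and swapping implementations. A simplified, monomorphic sketch (hypothetical types; the library version above keeps the effect type F abstract as well):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class TaglessFinalDemo {
  interface Console {
    void printLine(String text);
    String readLine();
  }

  // The programme: an abstract function over any Console. Unlike the Free
  // version, it is code, not data, so it cannot be inspected before running.
  static void greet(Console console) {
    console.printLine("What is your name?");
    String name = console.readLine();
    console.printLine("Hello, " + name + "!");
  }

  // Test interpreter: scripted input, captured output, no real I/O.
  static final class TestConsole implements Console {
    final Deque<String> input;
    final List<String> output = new ArrayList<>();
    TestConsole(List<String> input) { this.input = new ArrayDeque<>(input); }
    public void printLine(String text) { output.add(text); }
    public String readLine() { return input.poll(); }
  }

  public static void main(String[] args) {
    TestConsole console = new TestConsole(List.of("Alice"));
    greet(console);
    assert console.output.equals(List.of("What is your name?", "Hello, Alice!"));
    System.out.println(console.output);
  }
}
```

A production interpreter would simply be another Console implementation wrapping System.in/System.out; the programme itself never changes.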
Tagless Final vs. Free Monad:
| Aspect | Free Monad | Tagless Final |
|---|---|---|
| Programmes | Data structures | Abstract functions |
| Inspection | ✅ Can analyse before execution | ❌ Cannot inspect |
| Performance | Slower (interpretation overhead) | Faster (direct execution) |
| Boilerplate | More (HKT bridges, helpers) | Less (just interfaces) |
| Flexibility | ✅ Multiple interpreters, transformations | ✅ Multiple interpreters |
| Learning Curve | Steeper | Moderate |
When to use Tagless Final:
- Performance matters
- Don't need programme inspection
- Prefer less boilerplate
When to use Free Monad:
- Need to analyse/optimise programmes before execution
- Want programmes as first-class values
- Building complex DSLs with transformations
Performance Characteristics
Understanding the performance trade-offs of Free monads is crucial for production use:
Stack Safety: O(1) stack space regardless of programme depth
- Uses Higher-Kinded-J's Trampoline monad internally for foldMap
- Demonstrates library composability: Free uses Trampoline for stack safety
- Verified with 10,000+ sequential operations without stack overflow
Heap Allocation: O(n) where n is programme size
- Each flatMap creates a FlatMapped node
- Each suspend creates a Suspend node
- Consideration: For very large programmes (millions of operations), this could be significant
Interpretation Time: O(n) where n is programme size
- Each operation must be pattern-matched and interpreted
- Additional indirection compared to direct execution
- Rough estimate: 2-10x slower than direct imperative code (depends on interpreter complexity)
Optimisation Strategies:
- Batch Operations: Accumulate independent operations and execute in bulk.
  // Instead of 1000 individual database inserts
  Free<DB, Unit> manyInserts = ...;
  // Batch into a single multi-row insert
  interpreter.optimise(programme); // Detects pattern, batches
- Fusion: Combine consecutive map operations.
  programme.map(f).map(g).map(h)
  // Optimiser fuses to:
  programme.map(f.andThen(g).andThen(h))
- Short-Circuiting: Detect early termination.
  // If the programme returns early, skip the remaining operations
- Caching: Memoize pure computations.
  // Cache results of expensive pure operations
Benchmarks (relative to direct imperative code):
- Simple programmes (< 100 operations): 2-3x slower
- Complex programmes (1000+ operations): 3-5x slower
- With optimisation: Can approach parity for batch operations
Implementation Notes
The foldMap method leverages Higher-Kinded-J's own Trampoline monad to ensure stack-safe execution. This elegant design demonstrates that the library's abstractions are practical and composable:
public <M> Kind<M, A> foldMap(
Function<Kind<F, ?>, Kind<M, ?>> transform,
Monad<M> monad) {
// Delegate to Trampoline for stack-safe execution
return interpretFree(this, transform, monad).run();
}
private static <F, M, A> Trampoline<Kind<M, A>> interpretFree(
Free<F, A> free,
Function<Kind<F, ?>, Kind<M, ?>> transform,
Monad<M> monad) {
return switch (free) {
case Pure<F, A> pure ->
// Terminal case: lift the pure value into the target monad
Trampoline.done(monad.of(pure.value()));
case Suspend<F, A> suspend -> {
// Transform the suspended computation and recursively interpret
Kind<M, Free<F, A>> transformed =
(Kind<M, Free<F, A>>) transform.apply(suspend.computation());
yield Trampoline.done(
monad.flatMap(
innerFree -> interpretFree(innerFree, transform, monad).run(),
transformed));
}
case FlatMapped<F, ?, A> flatMapped -> {
// Handle FlatMapped by deferring the interpretation
FlatMapped<F, Object, A> fm = (FlatMapped<F, Object, A>) flatMapped;
yield Trampoline.defer(() ->
interpretFree(fm.sub(), transform, monad)
.map(kindOfX ->
monad.flatMap(
x -> {
Free<F, A> next = fm.continuation().apply(x);
return interpretFree(next, transform, monad).run();
},
kindOfX)));
}
};
}
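To make the Trampoline technique concrete, here is a minimal, self-contained trampoline (illustrative only; not the library's Trampoline API). done ends the computation, defer postpones a step, and run executes the deferred steps in a loop, trading stack frames for heap-allocated nodes:

```java
import java.util.function.Supplier;

public class TrampolineDemo {
  sealed interface Trampoline<A> {
    record Done<A>(A value) implements Trampoline<A> {}
    record Defer<A>(Supplier<Trampoline<A>> next) implements Trampoline<A> {}

    static <A> Trampoline<A> done(A value) { return new Done<>(value); }
    static <A> Trampoline<A> defer(Supplier<Trampoline<A>> next) { return new Defer<>(next); }

    // Runs in a loop on constant stack, however deep the deferral chain is.
    default A run() {
      Trampoline<A> current = this;
      while (current instanceof Defer<A> d) {
        current = d.next().get();
      }
      return ((Done<A>) current).value();
    }
  }

  // Deep recursion expressed as deferred steps: stack-safe for large n.
  static Trampoline<Long> sumTo(long n, long acc) {
    return n == 0
        ? Trampoline.done(acc)
        : Trampoline.defer(() -> sumTo(n - 1, acc + n));
  }

  public static void main(String[] args) {
    long result = sumTo(100_000, 0).run();
    assert result == 5_000_050_000L;
    System.out.println(result);
  }
}
```

The naive recursive sum would overflow the stack long before 100,000 frames; expressed as deferred steps, it runs in a simple loop, which is the same trick foldMap relies on above.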
Key Design Decisions:
- Trampoline Integration: Uses Trampoline.done() for terminal cases and Trampoline.defer() for recursive cases, ensuring stack safety.
- Library Composability: Demonstrates that Higher-Kinded-J's abstractions are practical: the Free monad uses Trampoline internally.
- Pattern Matching: Uses a sealed interface with switch expressions for type-safe case handling.
- Separation of Concerns: Trampoline handles stack safety; Free handles DSL interpretation.
- Type Safety: Uses careful casting to maintain type safety whilst leveraging Trampoline's proven stack-safe execution.
Benefits of Using Trampoline:
- Single source of truth for stack-safe recursion
- Proven implementation with 100% test coverage
- Elegant demonstration of library cohesion
- Improvements to Trampoline automatically benefit Free monad
Comparison with Traditional Java Patterns
Let's see how Free monads compare to familiar Java patterns:
Strategy Pattern
Traditional Strategy:
interface SortStrategy {
void sort(List<Integer> list);
}
class QuickSort implements SortStrategy { ... }
class MergeSort implements SortStrategy { ... }
// Choose algorithm at runtime
SortStrategy strategy = useQuickSort ? new QuickSort() : new MergeSort();
strategy.sort(myList);
Free Monad Equivalent:
sealed interface SortOp<A> {
record Compare(int i, int j) implements SortOp<Boolean> {}
record Swap(int i, int j) implements SortOp<Unit> {}
}
Free<SortOp, Unit> quickSort(List<Integer> list) {
// Build programme as data
return ...;
}
// Multiple interpreters
interpreter1.run(programme); // In-memory sort
interpreter2.run(programme); // Log operations
interpreter3.run(programme); // Visualise algorithm
Advantage of Free: The entire algorithm is a data structure that can be inspected, optimised, or visualised.
Command Pattern
Traditional Command:
interface Command {
void execute();
}
class SendEmailCommand implements Command { ... }
class SaveToDBCommand implements Command { ... }
List<Command> commands = List.of(
new SendEmailCommand(...),
new SaveToDBCommand(...)
);
commands.forEach(Command::execute);
Free Monad Equivalent:
sealed interface AppOp<A> {
record SendEmail(String to, String body) implements AppOp<Receipt> {}
record SaveToDB(Data data) implements AppOp<Id> {}
}
Free<AppOp, Result> workflow =
sendEmail("user@example.com", "Welcome!")
.flatMap(receipt -> saveToDatabase(receipt))
.flatMap(id -> sendNotification(id));
// One programme, many interpreters
productionInterpreter.run(workflow); // Real execution
testInterpreter.run(workflow); // Pure testing
loggingInterpreter.run(workflow); // Audit trail
Advantage of Free: Commands compose with flatMap, results flow between commands, and you get multiple interpreters for free.
Observer Pattern
Traditional Observer:
interface Observer {
void update(Event event);
}
class Logger implements Observer { ... }
class Notifier implements Observer { ... }
subject.registerObserver(logger);
subject.registerObserver(notifier);
subject.notifyObservers(event);
Free Monad Equivalent:
sealed interface EventOp<A> {
record Emit(Event event) implements EventOp<Unit> {}
record React(Event event) implements EventOp<Unit> {}
}
Free<EventOp, Unit> eventStream =
emit(userLoggedIn)
.flatMap(ignored -> emit(pageViewed))
.flatMap(ignored -> emit(itemPurchased));
// Different observation strategies
loggingInterpreter.run(eventStream); // Log to file
analyticsInterpreter.run(eventStream); // Send to analytics
testInterpreter.run(eventStream); // Collect for assertions
Advantage of Free: Event streams are first-class values that can be composed, transformed, and replayed.
Summary
The Free monad provides a powerful abstraction for building domain-specific languages in Java:
- Separation of Concerns: Programme description (data) vs. execution (interpreters)
- Testability: Pure testing without actual side effects
- Flexibility: Multiple interpreters for the same programme
- Stack Safety: Handles deep recursion without stack overflow (verified with 10,000+ operations)
- Composability: Build complex programmes from simple building blocks
When to use:
- Building DSLs
- Need multiple interpretations
- Testability is critical
- Programme analysis/optimisation required
When to avoid:
- Performance-critical code
- Simple, single-interpretation effects
- Team unfamiliar with advanced functional programming
For detailed implementation examples and complete working code, see:
- ConsoleProgram.java - Complete DSL with multiple interpreters
- FreeMonadTest.java - Comprehensive test suite including monad laws and stack safety
The Free monad represents a sophisticated approach to building composable, testable, and maintainable programmes in Java. Whilst it requires understanding of advanced functional programming concepts, it pays dividends in large-scale applications where flexibility and testability are paramount.
The TryMonad:
Typed Error Handling
- How to handle exceptions functionally with Success and Failure cases
- Converting exception-throwing code into composable, safe operations
- Using recover and recoverWith for graceful error recovery
- Building robust parsing and processing pipelines
- When to choose Try vs Either for error handling
Purpose
The Try<T> type in the Higher-Kinded-J library represents a computation that might result in a value of type T (a Success) or fail with a Throwable (a Failure). It serves as a functional alternative to traditional try-catch blocks for handling exceptions, particularly checked exceptions, within a computation chain. You can think of it as an Either whose Left is fixed to Throwable, with the try-catch blocks handled behind the scenes so that you don't have to write them.
Try Type
Monadic Structure
Key benefits include:
- Explicit Error Handling: Makes it clear from the return type (Try<T>) that a computation might fail.
- Composability: Allows chaining operations using methods like map and flatMap, where failures are automatically propagated without interrupting the flow with exceptions.
- Integration with HKT: Provides HKT simulation (TryKind) and type class instances (TryMonad) to work seamlessly with generic functional abstractions operating over Kind<F, A>.
- Error Recovery: Offers methods like recover and recoverWith to handle failures gracefully within the computation chain.
It implements MonadError<TryKind<?>, Throwable>, signifying its monadic nature and its ability to handle errors of type Throwable.
Now that we understand the structure and benefits of Try, let's explore how to create and work with Try instances in practice.
How to Use Try<T>
You can create Try instances in several ways:
- Try.of(Supplier): Executes a Supplier and wraps the result in Success, or catches any thrown Throwable (including Error and checked exceptions) and wraps it in Failure.

import org.higherkindedj.hkt.trymonad.Try;
import java.io.FileInputStream;

// Success case
Try<String> successResult = Try.of(() -> "This will succeed"); // Success("This will succeed")

// Failure case (checked exception)
Try<FileInputStream> failureResult = Try.of(() -> new FileInputStream("nonexistent.txt")); // Failure(FileNotFoundException)

// Failure case (runtime exception)
Try<Integer> divisionResult = Try.of(() -> 10 / 0); // Failure(ArithmeticException)

- Try.success(value): Directly creates a Success instance holding the given value (which can be null).

Try<String> directSuccess = Try.success("Known value");
Try<String> successNull = Try.success(null);

- Try.failure(throwable): Directly creates a Failure instance holding the given non-null Throwable.

Try<String> directFailure = Try.failure(new RuntimeException("Something went wrong"));
- isSuccess(): Returns true if it's a Success.
- isFailure(): Returns true if it's a Failure.
Getting the Value (Use with Caution)
- get(): Returns the value if Success, otherwise throws the contained Throwable. Avoid using this directly; prefer fold, map, flatMap, or recovery methods.
Transforming Values (map)
Applies a function to the value inside a Success. If the function throws an exception, the result becomes a Failure. If the original Try was a Failure, map does nothing and returns the original Failure.
Try<Integer> initialSuccess = Try.success(5);
Try<String> mappedSuccess = initialSuccess.map(value -> "Value: " + value); // Success("Value: 5")
Try<Integer> initialFailure = Try.failure(new RuntimeException("Fail"));
Try<String> mappedFailure = initialFailure.map(value -> "Value: " + value); // Failure(RuntimeException)
Try<Integer> mapThrows = initialSuccess.map(value -> { throw new NullPointerException(); }); // Failure(NullPointerException)
Sequencing Operations (flatMap)
Applies a function that returns another Try to the value inside a Success. This is used to sequence operations where each step might fail. Failures are propagated.
Function<Integer, Try<Double>> safeDivide =
value -> (value == 0) ? Try.failure(new ArithmeticException("Div by zero")) : Try.success(10.0 / value);
Try<Integer> inputSuccess = Try.success(2);
Try<Double> result1 = inputSuccess.flatMap(safeDivide); // Success(5.0)
Try<Integer> inputZero = Try.success(0);
Try<Double> result2 = inputZero.flatMap(safeDivide); // Failure(ArithmeticException)
Try<Integer> inputFailure = Try.failure(new RuntimeException("Initial fail"));
Try<Double> result3 = inputFailure.flatMap(safeDivide); // Failure(RuntimeException) - initial failure propagates
Handling Failures (fold, recover, recoverWith)
fold: Safely handles both cases by applying one of two functions.
String message = result2.fold(
successValue -> "Succeeded with " + successValue,
failureThrowable -> "Failed with " + failureThrowable.getMessage()
); // "Failed with Div by zero"
recover: If Failure, applies a function Throwable -> T to produce a new Success value. If the recovery function itself throws, the result is a Failure containing that new exception.
Function<Throwable, Double> recoverHandler = throwable -> -1.0;
Try<Double> recovered1 = result2.recover(recoverHandler); // Success(-1.0)
Try<Double> recovered2 = result1.recover(recoverHandler); // Stays Success(5.0)
recoverWith: Similar to recover, but the recovery function Throwable -> Try<T> must return a Try. This allows recovery to potentially result in another Failure.
Function<Throwable, Try<Double>> recoverWithHandler = throwable ->
(throwable instanceof ArithmeticException) ? Try.success(Double.POSITIVE_INFINITY) : Try.failure(throwable);
Try<Double> recoveredWith1 = result2.recoverWith(recoverWithHandler); // Success(Infinity)
Try<Double> recoveredWith2 = result3.recoverWith(recoverWithHandler); // Failure(RuntimeException) - re-raised
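The semantics described above can be captured in a small, self-contained stand-in (illustrative only; Higher-Kinded-J's actual Try is richer), which makes the propagation and recovery rules easy to verify:

```java
import java.util.function.Function;

public class MiniTryDemo {
  interface ThrowingSupplier<T> { T get() throws Throwable; }

  sealed interface Try<T> {
    record Success<T>(T value) implements Try<T> {}
    record Failure<T>(Throwable error) implements Try<T> {}

    // Catch anything the supplier throws and turn it into data.
    static <T> Try<T> of(ThrowingSupplier<? extends T> supplier) {
      try {
        return new Success<>(supplier.get());
      } catch (Throwable t) {
        return new Failure<>(t);
      }
    }

    default <U> Try<U> map(Function<? super T, ? extends U> f) {
      return switch (this) {
        case Success<T> s -> Try.<U>of(() -> f.apply(s.value())); // may become Failure
        case Failure<T> fl -> new Failure<U>(fl.error());          // propagated unchanged
      };
    }

    default <U> Try<U> flatMap(Function<? super T, Try<U>> f) {
      return switch (this) {
        case Success<T> s -> f.apply(s.value());
        case Failure<T> fl -> new Failure<U>(fl.error());
      };
    }

    default Try<T> recover(Function<Throwable, ? extends T> handler) {
      return switch (this) {
        case Success<T> s -> s;
        case Failure<T> fl -> Try.<T>of(() -> handler.apply(fl.error()));
      };
    }
  }

  // A pipeline: the division may throw, and recover supplies a fallback.
  static Try<Double> divideTenBy(int divisor) {
    return Try.of(() -> 10 / divisor)   // Failure(ArithmeticException) when divisor == 0
        .map(n -> n * 1.0)
        .recover(error -> -1.0);
  }

  public static void main(String[] args) {
    assert divideTenBy(2).toString().contains("5.0");
    assert divideTenBy(0).toString().contains("-1.0");
    System.out.println(divideTenBy(2) + " / " + divideTenBy(0));
  }
}
```

Note how the Failure branch carries the original Throwable through map and flatMap untouched, exactly as the library's propagation rules describe.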
To use Try with generic code expecting Kind<F, A>:
- Get Instance: TryMonad tryMonad = TryMonad.INSTANCE;
- Wrap (Widen): Use TRY.widen(myTry) or factories like TRY.tryOf(() -> ...).
- Operate: Use tryMonad.map(...), tryMonad.flatMap(...), tryMonad.handleErrorWith(...), etc.
- Unwrap (Narrow): Use TRY.narrow(tryKind) to get the Try<T> back.
TryMonad tryMonad = TryMonad.INSTANCE;
Kind<TryKind.Witness, Integer> tryKind1 = TRY.tryOf(() -> 10 / 2); // Success(5) Kind
Kind<TryKind.Witness, Integer> tryKind2 = TRY.tryOf(() -> 10 / 0); // Failure(...) Kind
// Map using Monad instance
Kind<TryKind.Witness, String> mappedKind = tryMonad.map(Object::toString, tryKind1); // Success("5") Kind
// FlatMap using Monad instance
Function<Integer, Kind<TryKind.Witness, Double>> safeDivideKind =
i -> TRY.tryOf(() -> 10.0 / i);
Kind<TryKind.Witness, Double> flatMappedKind = tryMonad.flatMap(safeDivideKind, tryKind1); // Success(2.0) Kind
// Handle error using MonadError instance
Kind<TryKind.Witness, Integer> handledKind = tryMonad.handleErrorWith(
tryKind2, // The Failure Kind
error -> TRY.success(-1) // Recover to Success(-1) Kind
);
// Unwrap
Try<String> mappedTry = TRY.narrow(mappedKind); // Success("5")
Try<Double> flatMappedTry = TRY.narrow(flatMappedKind); // Success(2.0)
Try<Integer> handledTry = TRY.narrow(handledKind); // Success(-1)
System.out.println(mappedTry);
System.out.println(flatMappedTry);
System.out.println(handledTry);
The ValidatedMonad:
Handling Valid or Invalid Operations
- How to distinguish between valid and invalid data with explicit types
- Using Validated as a MonadError for fail-fast error handling
- Understanding when to use monadic operations (fail-fast) vs applicative operations (error accumulation)
- The difference between fail-fast validation (Monad/MonadError) and error-accumulating validation (Applicative with Semigroup)
- Real-world input validation scenarios with detailed error reporting
Purpose
The Validated<E, A> type in Higher-Kinded-J represents a value that can either be Valid<A> (correct) or Invalid<E> (erroneous). It is commonly used in scenarios like input validation where you want to clearly distinguish between a successful result and an error. Unlike types like Either which are often used for general-purpose sum types, Validated is specifically focused on the valid/invalid dichotomy. Operations like map, flatMap, and ap are right-biased, meaning they operate on the Valid value and propagate Invalid values unchanged.
The ValidatedMonad<E> provides a monadic interface for Validated<E, A> (where the error type E is fixed for the monad instance), allowing for functional composition and integration with the Higher-Kinded-J framework. This facilitates chaining operations that can result in either a valid outcome or an error.
- Explicit Validation Outcome: The type signature Validated<E, A> makes it clear that a computation can result in either a success (Valid<A>) or an error (Invalid<E>).
- Functional Composition: Enables chaining of operations using map, flatMap, and ap. If an operation results in an Invalid, subsequent operations in the chain are typically short-circuited, propagating the Invalid state.
- HKT Integration: ValidatedKind<E, A> (the HKT wrapper for Validated<E, A>) and ValidatedMonad<E> allow Validated to be used with generic functions and type classes that expect Kind<F, A>, Functor<F>, Applicative<F>, or Monad<M>.
- Clear Error Handling: Provides methods like fold, ifValid, and ifInvalid to handle both Valid and Invalid cases explicitly.
- Standardised Error Handling: As a MonadError<ValidatedKind.Witness<E>, E>, it offers raiseError to construct error states and handleErrorWith for recovery, integrating with generic error-handling combinators.
ValidatedMonad<E> implements MonadError<ValidatedKind.Witness<E>, E>, which transitively includes Monad<ValidatedKind.Witness<E>>, Applicative<ValidatedKind.Witness<E>>, and Functor<ValidatedKind.Witness<E>>.
Structure
Validated Type
Conceptually, Validated<E, A> has two sub-types:
- `Valid<A>`: Contains a valid value of type `A`.
- `Invalid<E>`: Contains an error value of type `E`.
Monadic Structure
The ValidatedMonad<E> enables monadic operations on ValidatedKind.Witness<E>.
How to Use ValidatedMonad<E> and Validated<E, A>
Creating Instances
Validated<E, A> instances can be created directly using static factory methods on Validated. For HKT integration, ValidatedKindHelper and ValidatedMonad are used. ValidatedKind<E, A> is the HKT wrapper.
Direct Validated Creation & HKT Helpers: Refer to ValidatedMonadExample.java (Section 1) for runnable examples.
Creates a Valid instance holding a non-null value.
Validated<List<String>, String> validInstance = Validated.valid("Success!"); // Valid("Success!")
Creates an Invalid instance holding a non-null error.
Validated<List<String>, String> invalidInstance = Validated.invalid(Collections.singletonList("Error: Something went wrong.")); // Invalid([Error: Something went wrong.])
Converts a Validated<E, A> to Kind<ValidatedKind.Witness<E>, A> using VALIDATED.widen().
Kind<ValidatedKind.Witness<List<String>>, String> kindValid = VALIDATED.widen(Validated.valid("Wrapped"));
Converts a Kind<ValidatedKind.Witness<E>, A> back to Validated<E, A> using VALIDATED.narrow().
Validated<List<String>, String> narrowedValidated = VALIDATED.narrow(kindValid);
Convenience for widen(Validated.valid(value)) using VALIDATED.valid().
Kind<ValidatedKind.Witness<List<String>>, Integer> kindValidInt = VALIDATED.valid(123);
Convenience for widen(Validated.invalid(error)) using VALIDATED.invalid().
Kind<ValidatedKind.Witness<List<String>>, Integer> kindInvalidInt = VALIDATED.invalid(Collections.singletonList("Bad number"));
ValidatedMonad<E> Instance Methods:
Refer to ValidatedMonadExample.java (Sections 1 & 6) for runnable examples.
Lifts a value into ValidatedKind.Witness<E>, creating a Valid(value). This is part of the Applicative interface.
ValidatedMonad<List<String>> validatedMonad = ValidatedMonad.instance();
Kind<ValidatedKind.Witness<List<String>>, String> kindFromMonadOf = validatedMonad.of("Monadic Valid"); // Valid("Monadic Valid")
System.out.println("From monad.of(): " + VALIDATED.narrow(kindFromMonadOf));
Lifts an error E into the ValidatedKind context, creating an Invalid(error). This is part of the MonadError interface.
ValidatedMonad<List<String>> validatedMonad = ValidatedMonad.instance();
List<String> errorPayload = Collections.singletonList("Raised error condition");
Kind<ValidatedKind.Witness<List<String>>, String> raisedError =
validatedMonad.raiseError(errorPayload); // Invalid(["Raised error condition"])
System.out.println("From monad.raiseError(): " + VALIDATED.narrow(raisedError));
Interacting with Validated<E, A> values
The Validated<E, A> interface itself provides useful methods. Refer to ValidatedMonadExample.java (Section 5) for runnable examples of fold, ifValid, and ifInvalid.
- `isValid()`: Returns `true` if it's a `Valid`.
- `isInvalid()`: Returns `true` if it's an `Invalid`.
- `get()`: Returns the value if `Valid`, otherwise throws `NoSuchElementException`. Use with caution.
- `getError()`: Returns the error if `Invalid`, otherwise throws `NoSuchElementException`. Use with caution.
- `orElse(@NonNull A other)`: Returns the value if `Valid`, otherwise returns `other`.
- `orElseGet(@NonNull Supplier<? extends @NonNull A> otherSupplier)`: Returns the value if `Valid`, otherwise invokes `otherSupplier.get()`.
- `orElseThrow(@NonNull Supplier<? extends X> exceptionSupplier)`: Returns the value if `Valid`, otherwise throws the exception from the supplier.
- `ifValid(@NonNull Consumer<? super A> consumer)`: Performs the action if `Valid`.
- `ifInvalid(@NonNull Consumer<? super E> consumer)`: Performs the action if `Invalid`.
- `fold(@NonNull Function<? super E, ? extends T> invalidMapper, @NonNull Function<? super A, ? extends T> validMapper)`: Applies one of the two functions depending on the state.

`Validated` also has its own `map`, `flatMap`, and `ap` methods that operate directly on `Validated` instances.
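To make these accessors concrete, here is a small self-contained sketch. `MiniValidated` is a simplified stand-in, not the library's `Validated`; it only assumes the method shapes described in the list above.

```java
// A minimal stand-in illustrating the fold/orElse semantics listed above.
// MiniValidated is NOT the library type -- just a sketch of the same
// accessor behaviour.
import java.util.function.Function;

public class ValidatedAccessorsDemo {
    sealed interface MiniValidated<E, A> permits Valid, Invalid {
        // fold: apply exactly one of the two mappers, depending on the state
        default <T> T fold(Function<? super E, ? extends T> invalidMapper,
                           Function<? super A, ? extends T> validMapper) {
            if (this instanceof Valid<E, A> v) return validMapper.apply(v.value());
            return invalidMapper.apply(((Invalid<E, A>) this).error());
        }
        // orElse: fall back to a default when Invalid
        default A orElse(A other) {
            return this instanceof Valid<E, A> v ? v.value() : other;
        }
    }
    record Valid<E, A>(A value) implements MiniValidated<E, A> {}
    record Invalid<E, A>(E error) implements MiniValidated<E, A> {}

    public static void main(String[] args) {
        MiniValidated<String, Integer> ok = new Valid<>(42);
        MiniValidated<String, Integer> bad = new Invalid<>("parse failure");

        System.out.println(ok.fold(e -> "Error: " + e, v -> "Got: " + v));  // Got: 42
        System.out.println(bad.fold(e -> "Error: " + e, v -> "Got: " + v)); // Error: parse failure
        System.out.println(bad.orElse(-1));                                 // -1
    }
}
```

`fold` is usually the most flexible of these: both branches are handled in one expression, so no state check or exception is needed.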
Key Operations (via ValidatedMonad<E>)
These operations are performed on the HKT wrapper Kind<ValidatedKind.Witness<E>, A>. Refer to ValidatedMonadExample.java (Sections 2, 3, 4) for runnable examples of map, flatMap, and ap.
Applies f to the value inside kind if it's Valid. If kind is Invalid, the Invalid is propagated unchanged. (If f itself throws an exception, the behaviour depends on Validated.map's internal error handling; typically the result is again an Invalid.)
// From ValidatedMonadExample.java (Section 2)
ValidatedMonad<List<String>> validatedMonad = ValidatedMonad.instance();
Kind<ValidatedKind.Witness<List<String>>, Integer> validKindFromOf = validatedMonad.of(42);
Kind<ValidatedKind.Witness<List<String>>, Integer> invalidIntKind =
VALIDATED.invalid(Collections.singletonList("Initial error for map"));
Function<Integer, String> intToString = i -> "Value: " + i;
Kind<ValidatedKind.Witness<List<String>>, String> mappedValid =
validatedMonad.map(intToString, validKindFromOf); // Valid("Value: 42")
System.out.println("Map (Valid input): " + VALIDATED.narrow(mappedValid));
Kind<ValidatedKind.Witness<List<String>>, String> mappedInvalid =
validatedMonad.map(intToString, invalidIntKind); // Invalid(["Initial error for map"])
System.out.println("Map (Invalid input): " + VALIDATED.narrow(mappedInvalid));
If kind is Valid(a), applies f to a. f must return a Kind<ValidatedKind.Witness<E>, B>. If kind is Invalid, or f returns an Invalid Kind, the result is Invalid.
// From ValidatedMonadExample.java (Section 3)
ValidatedMonad<List<String>> validatedMonad = ValidatedMonad.instance();
Kind<ValidatedKind.Witness<List<String>>, Integer> positiveNumKind = validatedMonad.of(10);
Kind<ValidatedKind.Witness<List<String>>, Integer> nonPositiveNumKind = validatedMonad.of(-5);
Kind<ValidatedKind.Witness<List<String>>, Integer> invalidIntKind =
VALIDATED.invalid(Collections.singletonList("Initial error for flatMap"));
Function<Integer, Kind<ValidatedKind.Witness<List<String>>, String>> intToValidatedStringKind =
i -> {
if (i > 0) {
return VALIDATED.valid("Positive: " + i);
} else {
return VALIDATED.invalid(Collections.singletonList("Number not positive: " + i));
}
};
Kind<ValidatedKind.Witness<List<String>>, String> flatMappedToValid =
validatedMonad.flatMap(intToValidatedStringKind, positiveNumKind); // Valid("Positive: 10")
System.out.println("FlatMap (Valid to Valid): " + VALIDATED.narrow(flatMappedToValid));
Kind<ValidatedKind.Witness<List<String>>, String> flatMappedToInvalid =
validatedMonad.flatMap(intToValidatedStringKind, nonPositiveNumKind); // Invalid(["Number not positive: -5"])
System.out.println("FlatMap (Valid to Invalid): " + VALIDATED.narrow(flatMappedToInvalid));
Kind<ValidatedKind.Witness<List<String>>, String> flatMappedFromInvalid =
validatedMonad.flatMap(intToValidatedStringKind, invalidIntKind); // Invalid(["Initial error for flatMap"])
System.out.println("FlatMap (Invalid input): " + VALIDATED.narrow(flatMappedFromInvalid));
For ap(ff, fa):

- If `ff` is `Valid(f)` and `fa` is `Valid(a)`, applies `f` to `a`, resulting in `Valid(f(a))`.
- If `ff` is `Invalid`, its error is returned, regardless of `fa`; if both are `Invalid`, `ff`'s error takes precedence.
- If `ff` is `Valid` but `fa` is `Invalid`, `fa`'s error is returned.
Note: This ap behaviour is fail-fast and does not accumulate errors in the way some applicative validations do; it propagates the first Invalid it encounters, with an Invalid function taking precedence.
// From ValidatedMonadExample.java (Section 4)
ValidatedMonad<List<String>> validatedMonad = ValidatedMonad.instance();
Kind<ValidatedKind.Witness<List<String>>, Function<Integer, String>> validFnKind =
VALIDATED.valid(i -> "Applied: " + (i * 2));
Kind<ValidatedKind.Witness<List<String>>, Function<Integer, String>> invalidFnKind =
VALIDATED.invalid(Collections.singletonList("Function is invalid"));
Kind<ValidatedKind.Witness<List<String>>, Integer> validValueForAp = validatedMonad.of(25);
Kind<ValidatedKind.Witness<List<String>>, Integer> invalidValueForAp =
VALIDATED.invalid(Collections.singletonList("Value is invalid"));
// Valid function, Valid value
Kind<ValidatedKind.Witness<List<String>>, String> apValidFnValidVal =
validatedMonad.ap(validFnKind, validValueForAp); // Valid("Applied: 50")
System.out.println("Ap (ValidFn, ValidVal): " + VALIDATED.narrow(apValidFnValidVal));
// Invalid function, Valid value
Kind<ValidatedKind.Witness<List<String>>, String> apInvalidFnValidVal =
validatedMonad.ap(invalidFnKind, validValueForAp); // Invalid(["Function is invalid"])
System.out.println("Ap (InvalidFn, ValidVal): " + VALIDATED.narrow(apInvalidFnValidVal));
// Valid function, Invalid value
Kind<ValidatedKind.Witness<List<String>>, String> apValidFnInvalidVal =
validatedMonad.ap(validFnKind, invalidValueForAp); // Invalid(["Value is invalid"])
System.out.println("Ap (ValidFn, InvalidVal): " + VALIDATED.narrow(apValidFnInvalidVal));
// Invalid function, Invalid value
Kind<ValidatedKind.Witness<List<String>>, String> apInvalidFnInvalidVal =
validatedMonad.ap(invalidFnKind, invalidValueForAp); // Invalid(["Function is invalid"])
System.out.println("Ap (InvalidFn, InvalidVal): " + VALIDATED.narrow(apInvalidFnInvalidVal));
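The fail-fast behaviour shown above contrasts with the error-accumulating applicative style mentioned at the start of this chapter. The following self-contained sketch illustrates the difference using a simple stand-in `Result` type (not the library's `Validated`); accumulation corresponds to combining errors with a Semigroup, here list concatenation.

```java
// Sketch contrasting fail-fast (monadic) error handling with
// error-accumulating applicative validation, using a stand-in Result type.
import java.util.ArrayList;
import java.util.List;

public class ApStyles {
    record Result<A>(A value, List<String> errors) {
        static <A> Result<A> ok(A v) { return new Result<>(v, List.of()); }
        static <A> Result<A> err(String e) { return new Result<>(null, List.of(e)); }
        boolean isOk() { return errors.isEmpty(); }
    }

    // Fail-fast: stop at the first error, like the ap shown above
    static Result<Integer> failFast(Result<Integer> a, Result<Integer> b) {
        if (!a.isOk()) return new Result<>(null, a.errors());
        if (!b.isOk()) return new Result<>(null, b.errors());
        return Result.ok(a.value() + b.value());
    }

    // Accumulating: report every error by concatenating the error lists
    static Result<Integer> accumulate(Result<Integer> a, Result<Integer> b) {
        if (a.isOk() && b.isOk()) return Result.ok(a.value() + b.value());
        List<String> all = new ArrayList<>(a.errors());
        all.addAll(b.errors());
        return new Result<>(null, all);
    }

    public static void main(String[] args) {
        Result<Integer> e1 = Result.err("name missing");
        Result<Integer> e2 = Result.err("age negative");
        System.out.println(failFast(e1, e2).errors());   // [name missing]
        System.out.println(accumulate(e1, e2).errors()); // [name missing, age negative]
    }
}
```

Use the fail-fast style when later steps depend on earlier results, and the accumulating style when validating independent fields and you want to report all problems at once.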
MonadError Operations
As ValidatedMonad<E> implements MonadError<ValidatedKind.Witness<E>, E>, it provides standardised ways to create and handle errors. Refer to ValidatedMonadExample.java (Section 6) for detailed examples.
// From ValidatedMonadExample.java (Section 6)
ValidatedMonad<List<String>> validatedMonad = ValidatedMonad.instance();
List<String> initialError = Collections.singletonList("Initial Failure");
// 1. Create an Invalid Kind using raiseError
Kind<ValidatedKind.Witness<List<String>>, Integer> invalidKindRaised =
validatedMonad.raiseError(initialError);
System.out.println("Raised error: " + VALIDATED.narrow(invalidKindRaised)); // Invalid([Initial Failure])
// 2. Handle the error: recover to a Valid state
Function<List<String>, Kind<ValidatedKind.Witness<List<String>>, Integer>> recoverToValid =
errors -> {
System.out.println("MonadError: Recovery handler called with errors: " + errors);
return validatedMonad.of(0); // Recover with default value 0
};
Kind<ValidatedKind.Witness<List<String>>, Integer> recoveredValid =
validatedMonad.handleErrorWith(invalidKindRaised, recoverToValid);
System.out.println("Recovered to Valid: " + VALIDATED.narrow(recoveredValid)); // Valid(0)
// 3. Handle the error: transform to another Invalid state
Function<List<String>, Kind<ValidatedKind.Witness<List<String>>, Integer>> transformError =
errors -> validatedMonad.raiseError(Collections.singletonList("Transformed Error: " + errors.get(0)));
Kind<ValidatedKind.Witness<List<String>>, Integer> transformedInvalid =
validatedMonad.handleErrorWith(invalidKindRaised, transformError);
System.out.println("Transformed to Invalid: " + VALIDATED.narrow(transformedInvalid)); // Invalid([Transformed Error: Initial Failure])
// 4. Handle a Valid Kind: handler is not called
Kind<ValidatedKind.Witness<List<String>>, Integer> validKindOriginal = validatedMonad.of(100);
Kind<ValidatedKind.Witness<List<String>>, Integer> notHandled =
validatedMonad.handleErrorWith(validKindOriginal, recoverToValid); // Handler not called
System.out.println("Handling Valid (no change): " + VALIDATED.narrow(notHandled)); // Valid(100)
// 5. Using a default method like handleError
Kind<ValidatedKind.Witness<List<String>>, Integer> errorForHandle = validatedMonad.raiseError(Collections.singletonList("Error for handleError"));
Function<List<String>, Integer> plainValueRecoveryHandler = errors -> -1; // Returns plain value
Kind<ValidatedKind.Witness<List<String>>, Integer> recoveredWithHandle = validatedMonad.handleError(errorForHandle, plainValueRecoveryHandler);
System.out.println("Recovered with handleError: " + VALIDATED.narrow(recoveredWithHandle)); // Valid(-1)
The default recover and recoverWith methods from MonadError are also available.
This example demonstrates how ValidatedMonad along with Validated can be used to chain operations that might succeed or fail. With ValidatedMonad now implementing MonadError, operations like raiseError can be used for clearer error signaling, and handleErrorWith (or other MonadError methods) can be used for more robust recovery strategies within such validation flows.
- See the "Combined Validation Scenario" section of ValidatedMonadExample.java for the full runnable example.
// Simplified from the ValidatedMonadExample.java
public void combinedValidationScenarioWithMonadError() {
ValidatedMonad<List<String>> validatedMonad = ValidatedMonad.instance();
Kind<ValidatedKind.Witness<List<String>>, String> userInput1 = validatedMonad.of("123");
Kind<ValidatedKind.Witness<List<String>>, String> userInput2 = validatedMonad.of("abc"); // This will lead to an Invalid
Function<String, Kind<ValidatedKind.Witness<List<String>>, Integer>> parseToIntKindMonadError =
(String s) -> {
try {
return validatedMonad.of(Integer.parseInt(s)); // Lifts to Valid
} catch (NumberFormatException e) {
// Using raiseError for semantic clarity
return validatedMonad.raiseError(
Collections.singletonList("'" + s + "' is not a number (via raiseError)."));
}
};
Kind<ValidatedKind.Witness<List<String>>, Integer> parsed1 =
validatedMonad.flatMap(parseToIntKindMonadError, userInput1);
Kind<ValidatedKind.Witness<List<String>>, Integer> parsed2 =
validatedMonad.flatMap(parseToIntKindMonadError, userInput2); // Will be Invalid
System.out.println("Parsed Input 1 (Combined): " + VALIDATED.narrow(parsed1)); // Valid(123)
System.out.println("Parsed Input 2 (Combined): " + VALIDATED.narrow(parsed2)); // Invalid(['abc' is not a number...])
// Example of recovering the parse of userInput2 using handleErrorWith
Kind<ValidatedKind.Witness<List<String>>, Integer> parsed2Recovered =
validatedMonad.handleErrorWith(
parsed2,
errors -> {
System.out.println("Combined scenario recovery: " + errors);
return validatedMonad.of(0); // Default to 0 if parsing failed
});
System.out.println(
"Parsed Input 2 (Recovered to 0): " + VALIDATED.narrow(parsed2Recovered)); // Valid(0)
}
This example demonstrates how ValidatedMonad along with Validated can be used to chain operations that might succeed or fail, propagating errors and allowing for clear handling of either outcome, further enhanced by MonadError capabilities.
The WriterMonad: Accumulating Output Alongside Computations
- How to accumulate logs or output alongside your main computation
- Understanding the role of Monoid in combining accumulated values
- Building detailed audit trails and debugging information
- Using `tell` for pure logging and `listen` for capturing output
- Creating calculations that produce both results and comprehensive logs
Purpose
The Writer monad is a functional pattern designed for computations that, in addition to producing a primary result value, also need to accumulate some secondary output or log along the way. Think of scenarios like:
- Detailed logging of steps within a complex calculation.
- Collecting metrics or events during a process.
- Building up a sequence of results or messages.
A Writer<W, A> represents a computation that produces a main result of type A and simultaneously accumulates an output of type W. The key requirement is that the accumulated type W must form a Monoid.
The Role of Monoid<W>
A Monoid<W> is a type class that defines two things for type W:
- `empty()`: Provides an identity element (like `""` for String concatenation, `0` for addition, or an empty list).
- `combine(W w1, W w2)`: Provides an associative binary operation to combine two values of type `W` (like `+` for strings or numbers, or list concatenation).
The Writer monad uses the Monoid<W> to:
- Provide a starting point (the `empty` value) for the accumulation.
- Combine the accumulated outputs (`W`) from different steps using the `combine` operation when sequencing computations with `flatMap` or `ap`.
Common examples for W include String (using concatenation), Integer (using addition or multiplication), or List (using concatenation).
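As a sketch of these ideas, here are two hand-rolled monoids. `MiniMonoid` is a local stand-in with the same two operations as the library's `Monoid` interface, instantiated for String concatenation and Integer addition.

```java
// A sketch of the Monoid idea described above. MiniMonoid is a stand-in
// with the same shape (empty/combine) as the library's Monoid interface.
public class MonoidDemo {
    interface MiniMonoid<W> {
        W empty();             // identity element
        W combine(W w1, W w2); // associative binary operation
    }

    static final MiniMonoid<String> CONCAT = new MiniMonoid<>() {
        public String empty() { return ""; }
        public String combine(String a, String b) { return a + b; }
    };

    static final MiniMonoid<Integer> SUM = new MiniMonoid<>() {
        public Integer empty() { return 0; }
        public Integer combine(Integer a, Integer b) { return a + b; }
    };

    public static void main(String[] args) {
        // empty() is an identity for combine()
        System.out.println(CONCAT.combine(CONCAT.empty(), "log")); // log
        // combine() is associative, so folds may group either way
        System.out.println(SUM.combine(1, SUM.combine(2, 3)));     // 6
    }
}
```

The identity and associativity laws are what let Writer combine logs from any number of steps, in any grouping, without changing the result.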
Structure
The Writer<W, A> record directly implements WriterKind<W, A>, which in turn extends Kind<WriterKind.Witness<W>, A>.
The Writer<W, A> Type
The core type is the Writer<W, A> record:
// From: org.higherkindedj.hkt.writer.Writer
public record Writer<W, A>(@NonNull W log, @Nullable A value) implements WriterKind<W, A> {
// Static factories
public static <W, A> @NonNull Writer<W, A> create(@NonNull W log, @Nullable A value);
public static <W, A> @NonNull Writer<W, A> value(@NonNull Monoid<W> monoidW, @Nullable A value); // Creates (monoidW.empty(), value)
public static <W> @NonNull Writer<W, Unit> tell(@NonNull W log); // Creates (log, Unit.INSTANCE)
// Instance methods (primarily for direct use, HKT versions via Monad instance)
public <B> @NonNull Writer<W, B> map(@NonNull Function<? super A, ? extends B> f);
public <B> @NonNull Writer<W, B> flatMap(
@NonNull Monoid<W> monoidW, // Monoid needed for combining logs
@NonNull Function<? super A, ? extends Writer<W, ? extends B>> f
);
public @Nullable A run(); // Get the value A, discard log
public @NonNull W exec(); // Get the log W, discard value
}
The record simply holds a pair: the accumulated `log` (of type `W`) and the computed `value` (of type `A`). Its methods:

- `create(log, value)`: Basic constructor.
- `value(monoid, value)`: Creates a Writer with the given value and an empty log according to the provided `Monoid`.
- `tell(log)`: Creates a Writer with the given log and `Unit.INSTANCE` as its value, signifying that the operation's primary purpose is accumulating the log. Useful for just adding to the log.
- `map(...)`: Transforms the computed value `A` to `B` while leaving the log `W` untouched.
- `flatMap(...)`: Sequences computations. It runs the first Writer, uses its value `A` to create a second Writer, and combines the logs from both using the provided `Monoid`.
- `run()`: Extracts only the computed value `A`, discarding the log.
- `exec()`: Extracts only the accumulated log `W`, discarding the value.
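The log-combining behaviour of flatMap can be sketched independently of the library types. `MiniWriter` below is a stand-in with the log type fixed to String, so the concatenation monoid is hard-coded rather than passed in.

```java
// A tiny stand-in for the Writer record above, with the log type fixed to
// String (the concatenation monoid), showing how flatMap combines logs.
import java.util.function.Function;

public class WriterSketch {
    record MiniWriter<A>(String log, A value) {
        <B> MiniWriter<B> flatMap(Function<? super A, MiniWriter<B>> f) {
            MiniWriter<B> next = f.apply(value);
            // run the first writer, feed its value to f, concatenate both logs
            return new MiniWriter<>(log + next.log(), next.value());
        }
    }

    public static void main(String[] args) {
        MiniWriter<Integer> start = new MiniWriter<>("start; ", 5);
        MiniWriter<Integer> result =
            start.flatMap(x -> new MiniWriter<>("added 10; ", x + 10));
        System.out.println(result.log());   // prints "start; added 10; "
        System.out.println(result.value()); // prints 15
    }
}
```

The real Writer generalises exactly this shape: the `log + next.log()` step becomes `monoid.combine(log, next.log())` for an arbitrary `Monoid<W>`.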
Writer Components
To integrate Writer with Higher-Kinded-J:
- `WriterKind<W, A>`: The HKT interface. `Writer<W, A>` itself implements `WriterKind<W, A>`, which extends `Kind<WriterKind.Witness<W>, A>`. It contains a nested `final class Witness<LOG_W> {}` that serves as the phantom type `F_WITNESS` for `Writer<LOG_W, ?>`.
- `WriterKindHelper`: The utility class with static methods:
  - `widen(Writer<W, A>)`: Converts a `Writer` to `Kind<WriterKind.Witness<W>, A>`. Since `Writer` directly implements `WriterKind`, this is effectively a checked cast.
  - `narrow(Kind<WriterKind.Witness<W>, A>)`: Converts a `Kind` back to `Writer<W, A>`. This is also effectively a checked cast, after an `instanceof Writer` check.
  - `value(Monoid<W> monoid, A value)`: Factory method for a `Kind` representing a `Writer` with an empty log.
  - `tell(W log)`: Factory method for a `Kind` representing a `Writer` that only logs.
  - `runWriter(Kind<WriterKind.Witness<W>, A>)`: Unwraps to `Writer<W, A>` and returns the record itself.
  - `run(Kind<WriterKind.Witness<W>, A>)`: Unwraps and returns only the value `A`.
  - `exec(Kind<WriterKind.Witness<W>, A>)`: Unwraps and returns only the log `W`.
Type Class Instances (WriterFunctor, WriterApplicative, WriterMonad)
These classes provide the standard functional operations for Kind<WriterKind.Witness<W>, A>, allowing you to treat Writer computations generically. Crucially, WriterApplicative<W> and WriterMonad<W> require a Monoid<W> instance during construction.
- `WriterFunctor<W>`: Implements `Functor<WriterKind.Witness<W>>`. Provides `map`, which operates only on the value `A`.
- `WriterApplicative<W>`: Extends `WriterFunctor<W>` and implements `Applicative<WriterKind.Witness<W>>`. Requires a `Monoid<W>`. Provides `of` (lifting a value with an empty log) and `ap` (applying a wrapped function to a wrapped value, combining logs).
- `WriterMonad<W>`: Extends `WriterApplicative<W>` and implements `Monad<WriterKind.Witness<W>>`. Requires a `Monoid<W>`. Provides `flatMap` for sequencing computations, automatically combining logs using the `Monoid`.
You typically instantiate WriterMonad<W> for the specific log type W and its corresponding Monoid.
1. Choose Your Log Type W and Monoid<W>
Decide what you want to accumulate (e.g., String for logs, List<String> for messages, Integer for counts) and get its Monoid.
class StringMonoid implements Monoid<String> {
@Override public String empty() { return ""; }
@Override public String combine(String x, String y) { return x + y; }
}
Monoid<String> stringMonoid = new StringMonoid();
2. Get the WriterMonad Instance
Instantiate the monad for your chosen log type W, providing its Monoid.
import org.higherkindedj.hkt.writer.WriterMonad;
// Monad instance for computations logging Strings
// F_WITNESS here is WriterKind.Witness<String>
WriterMonad<String> writerMonad = new WriterMonad<>(stringMonoid);
3. Create Writer Computations
Use WriterKindHelper factory methods, providing the Monoid where needed. The result is Kind<WriterKind.Witness<W>, A>.
// Writer with an initial value and empty log
Kind<WriterKind.Witness<String>, Integer> initialValue = WRITER.value(stringMonoid, 5); // Log: "", Value: 5
// Writer that just logs a message (value is Unit.INSTANCE)
Kind<WriterKind.Witness<String>, Unit> logStart = WRITER.tell("Starting calculation; "); // Log: "Starting calculation; ", Value: ()
// A function that performs a calculation and logs its step
Function<Integer, Kind<WriterKind.Witness<String>, Integer>> addAndLog =
x -> {
int result = x + 10;
String logMsg = "Added 10 to " + x + " -> " + result + "; ";
// Create a Writer directly then wrap with helper or use helper factory
return WRITER.widen(Writer.create(logMsg, result));
};
Function<Integer, Kind<WriterKind.Witness<String>, String>> multiplyAndLogToString =
x -> {
int result = x * 2;
String logMsg = "Multiplied " + x + " by 2 -> " + result + "; ";
return WRITER.widen(Writer.create(logMsg, "Final:" + result));
};
4. Compose Computations using map and flatMap
Use the methods on the writerMonad instance. flatMap automatically combines logs using the Monoid.
// Chain the operations:
// Start with a pure value 0 in the Writer context (empty log)
Kind<WriterKind.Witness<String>, Integer> computationStart = writerMonad.of(0);
// 1. Log the start
Kind<WriterKind.Witness<String>, Integer> afterLogStart = writerMonad.flatMap(ignoredUnit -> initialValue, logStart);
Kind<WriterKind.Witness<String>, Integer> step1Value = WRITER.value(stringMonoid, 5); // ("", 5)
Kind<WriterKind.Witness<String>, Unit> step1Log = WRITER.tell("Initial value set to 5; "); // ("Initial value set to 5; ", ())
// Start -> log -> transform value -> log -> transform value ...
Kind<WriterKind.Witness<String>, Integer> calcPart1 = writerMonad.flatMap(
ignored -> addAndLog.apply(5), // Apply addAndLog to 5, after logging "start"
WRITER.tell("Starting with 5; ")
);
// calcPart1: Log: "Starting with 5; Added 10 to 5 -> 15; ", Value: 15
Kind<WriterKind.Witness<String>, String> finalComputation = writerMonad.flatMap(
intermediateValue -> multiplyAndLogToString.apply(intermediateValue),
calcPart1
);
// finalComputation: Log: "Starting with 5; Added 10 to 5 -> 15; Multiplied 15 by 2 -> 30; ", Value: "Final:30"
// Using map: Only transforms the value, log remains unchanged from the input Kind
Kind<WriterKind.Witness<String>, Integer> initialValForMap = WRITER.value(stringMonoid, 100); // Log: "", Value: 100
Kind<WriterKind.Witness<String>, String> mappedVal = writerMonad.map(
i -> "Value is " + i,
initialValForMap
); // Log: "", Value: "Value is 100"
5. Run the Computation and Extract Results
Use WRITER.runWriter, WRITER.run, or WRITER.exec from WriterKindHelper.
import org.higherkindedj.hkt.writer.Writer;
// Get the final Writer record (log and value)
Writer<String, String> finalResultWriter = WRITER.runWriter(finalComputation);
String finalLog = finalResultWriter.log();
String finalValue = finalResultWriter.value();
System.out.println("Final Log: " + finalLog);
// Output: Final Log: Starting with 5; Added 10 to 5 -> 15; Multiplied 15 by 2 -> 30;
System.out.println("Final Value: " + finalValue);
// Output: Final Value: Final:30
// Or get only the value or log
String justValue = WRITER.run(finalComputation); // Extracts value from finalResultWriter
String justLog = WRITER.exec(finalComputation); // Extracts log from finalResultWriter
System.out.println("Just Value: " + justValue); // Output: Just Value: Final:30
System.out.println("Just Log: " + justLog); // Output: Just Log: Starting with 5; Added 10 to 5 -> 15; Multiplied 15 by 2 -> 30;
Writer<String, String> mappedResult = WRITER.runWriter(mappedVal);
System.out.println("Mapped Log: " + mappedResult.log()); // Output: Mapped Log:  (the log is empty)
System.out.println("Mapped Value: " + mappedResult.value()); // Output: Mapped Value: Value is 100
The Writer monad (Writer<W, A>, WriterKind.Witness<W>, WriterMonad<W>) in Higher-Kinded-J provides a structured way to perform computations that produce a main value (A) while simultaneously accumulating some output (W, like logs or metrics).
It relies on a Monoid<W> instance to combine the accumulated outputs when sequencing steps with flatMap. This pattern helps separate the core computation logic from the logging/accumulation aspect, leading to cleaner, more composable code.
Higher-Kinded-J enables these operations to be performed generically through the standard type class interfaces, with Writer<W, A> directly implementing WriterKind<W, A>.
The Const Type: Constant Functors with Phantom Types
- Understanding phantom types and how Const ignores its second type parameter
- Using Const for efficient fold implementations and data extraction
- Leveraging Const with bifunctor operations to transform constant values
- Applying Const in lens and traversal patterns for compositional getters
- Real-world use cases in validation, accumulation, and data mining
- How Const relates to Scala's Const and van Laarhoven lenses
The Const type is a constant functor that holds a value of type C whilst treating A as a phantom type parameter—a type that exists only in the type signature but has no runtime representation. This seemingly simple property unlocks powerful patterns for accumulating values, implementing efficient folds, and building compositional getters in the style of van Laarhoven lenses.
New to phantom types? See the Glossary for a detailed explanation with Java-focused examples, or continue reading for practical demonstrations.
What is Const?
A Const<C, A> is a container that holds a single value of type C. The type parameter A is phantom—it influences the type signature for composition and type safety but doesn't correspond to any stored data. This asymmetry is the key to Const's utility.
// Create a Const holding a String, with Integer as the phantom type
Const<String, Integer> stringConst = new Const<>("Hello");
// The constant value is always accessible
String value = stringConst.value(); // "Hello"
// Create a Const holding a count, with Person as the phantom type
Const<Integer, Person> countConst = new Const<>(42);
int count = countConst.value(); // 42
Key Characteristics
- Constant value: Holds a value of type `C` that can be retrieved via `value()`.
- Phantom type: The type parameter `A` exists only for type-level composition.
- Bifunctor instance: Implements `Bifunctor<ConstKind2.Witness>`, where:
  - `first(f, const)` transforms the constant value.
  - `second(g, const)` changes only the phantom type, leaving the constant value unchanged.
  - `bimap(f, g, const)` combines both transformations (but only `f` affects the constant).
Core Components
The Const Type
public record Const<C, A>(C value) {
public <D> Const<D, A> mapFirst(Function<? super C, ? extends D> firstMapper);
public <B> Const<C, B> mapSecond(Function<? super A, ? extends B> secondMapper);
public <D, B> Const<D, B> bimap(
Function<? super C, ? extends D> firstMapper,
Function<? super A, ? extends B> secondMapper);
}
The HKT Bridge for Const
- `ConstKind2<C, A>`: The HKT marker interface extending `Kind2<ConstKind2.Witness, C, A>`.
- `ConstKind2.Witness`: The phantom type witness for Const in the Kind2 system.
- `ConstKindHelper`: Utility providing `widen2` and `narrow2` for Kind2 conversions.
Type Classes for Const
- `ConstBifunctor`: The singleton bifunctor instance implementing `Bifunctor<ConstKind2.Witness>`.
The Phantom Type Property
The defining characteristic of Const is that mapping over the second type parameter has no effect on the constant value. This property is enforced both conceptually and at runtime.
import static org.higherkindedj.hkt.constant.ConstKindHelper.CONST;
Bifunctor<ConstKind2.Witness> bifunctor = ConstBifunctor.INSTANCE;
// Start with a Const holding an integer count
Const<Integer, String> original = new Const<>(42);
System.out.println("Original value: " + original.value());
// Output: 42
// Use second() to change the phantom type from String to Double
Kind2<ConstKind2.Witness, Integer, Double> transformed =
bifunctor.second(
s -> s.length() * 2.0, // Function defines phantom type transformation
CONST.widen2(original));
Const<Integer, Double> result = CONST.narrow2(transformed);
System.out.println("After second(): " + result.value());
// Output: 42 (UNCHANGED!)
// The phantom type changed (String -> Double), but the constant value stayed 42
Note: Whilst the mapper function in second() is never applied to actual data (since A is phantom), it is still validated and applied to null for exception propagation. This maintains consistency with bifunctor exception semantics.
Const as a Bifunctor
Const naturally implements the Bifunctor type class, providing three fundamental operations:
1. first() - Transform the Constant Value
The first operation transforms the constant value from type C to type D, leaving the phantom type unchanged.
Const<String, Integer> stringConst = new Const<>("hello");
// Transform the constant value from String to Integer
Kind2<ConstKind2.Witness, Integer, Integer> lengthConst =
bifunctor.first(String::length, CONST.widen2(stringConst));
Const<Integer, Integer> result = CONST.narrow2(lengthConst);
System.out.println(result.value()); // Output: 5
2. second() - Transform Only the Phantom Type
The second operation changes the phantom type from A to B without affecting the constant value.
Const<String, Integer> stringConst = new Const<>("constant");
// Change the phantom type from Integer to Boolean
Kind2<ConstKind2.Witness, String, Boolean> boolConst =
bifunctor.second(i -> i > 10, CONST.widen2(stringConst));
Const<String, Boolean> result = CONST.narrow2(boolConst);
System.out.println(result.value()); // Output: "constant" (unchanged)
3. bimap() - Transform Both Simultaneously
The bimap operation combines both transformations, but remember: only the first function affects the constant value.
Const<String, Integer> original = new Const<>("hello");
Kind2<ConstKind2.Witness, Integer, String> transformed =
bifunctor.bimap(
String::length, // Transforms constant: "hello" -> 5
i -> "Number: " + i, // Phantom type transformation only
CONST.widen2(original));
Const<Integer, String> result = CONST.narrow2(transformed);
System.out.println(result.value()); // Output: 5
Use Case 1: Efficient Fold Implementations
One of the most practical applications of Const is implementing folds that accumulate a single value whilst traversing a data structure. The phantom type represents the "shape" being traversed, whilst the constant value accumulates the result.
// Count elements in a list using Const
List<String> items = List.of("apple", "banana", "cherry", "date");
Const<Integer, String> count = items.stream()
.reduce(
new Const<>(0), // Initial count
(acc, item) -> new Const<Integer, String>(acc.value() + 1), // Increment
(c1, c2) -> new Const<>(c1.value() + c2.value())); // Combine
System.out.println("Count: " + count.value());
// Output: 4
// Accumulate total length of all strings
Const<Integer, String> totalLength = items.stream()
.reduce(
new Const<>(0),
(acc, item) -> new Const<Integer, String>(acc.value() + item.length()),
(c1, c2) -> new Const<>(c1.value() + c2.value()));
System.out.println("Total length: " + totalLength.value());
// Output: 21
In this pattern, the phantom type (String) represents the type of elements we're folding over, whilst the constant value (Integer) accumulates the result. This mirrors the implementation of folds in libraries like Cats and Scalaz in Scala.
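To see the pattern without the stream plumbing, here is a minimal, self-contained sketch using a stand-alone Const record (mirroring, but not using, the library's own class — the record and helper here are illustrative assumptions):

```java
import java.util.List;
import java.util.function.BiFunction;

public class ConstFoldSketch {
    // Stand-alone Const: stores C; the second parameter A is purely phantom
    record Const<C, A>(C value) {}

    // A fold whose return type records both the accumulator type C and the
    // element type A that was traversed
    static <A, C> Const<C, A> foldConst(List<A> items, C zero, BiFunction<C, A, C> step) {
        C acc = zero;
        for (A item : items) {
            acc = step.apply(acc, item);
        }
        return new Const<>(acc);
    }

    public static void main(String[] args) {
        List<String> items = List.of("apple", "banana", "cherry", "date");
        System.out.println(foldConst(items, 0, (n, s) -> n + 1).value());          // 4
        System.out.println(foldConst(items, 0, (n, s) -> n + s.length()).value()); // 21
    }
}
```

The phantom parameter costs nothing at runtime; it only keeps the fold's signature honest about what was traversed.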
Use Case 2: Getters and Van Laarhoven Lenses
Const is fundamental to the lens pattern pioneered by Edward Kmett and popularised in Scala libraries like Monocle. A lens is an abstraction for focusing on a part of a data structure, and Const enables the "getter" half of this abstraction.
The Getter Pattern
A getter extracts a field from a structure without transforming it. Using Const, we represent this as a function that produces a Const where the phantom type tracks the source structure.
record Person(String name, int age, String city) {}
record Company(String name, Person ceo) {}
Person alice = new Person("Alice", 30, "London");
Company acmeCorp = new Company("ACME Corp", alice);
// Define a getter using Const
Function<Person, Const<String, Person>> nameGetter =
person -> new Const<>(person.name());
// Extract the name
Const<String, Person> nameConst = nameGetter.apply(alice);
System.out.println("CEO name: " + nameConst.value());
// Output: Alice
// Define a getter for the CEO from a Company
Function<Company, Const<Person, Company>> ceoGetter =
company -> new Const<>(company.ceo());
// Compose getters: get CEO name from Company using mapFirst
Function<Company, Const<String, Company>> ceoNameGetter = company ->
ceoGetter.apply(company)
.mapFirst(person -> nameGetter.apply(person).value());
Const<String, Company> result = ceoNameGetter.apply(acmeCorp);
System.out.println("Company CEO name: " + result.value());
// Output: Alice
This pattern is the foundation of van Laarhoven lenses, where Const is used with Functor or Applicative to implement compositional getters. For a deeper dive, see Van Laarhoven Lenses and Scala Monocle.
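The van Laarhoven trick can be shown in miniature. In this hypothetical, self-contained sketch (not the library's API — the Const record and retag helper are assumptions for illustration), a lens specialised to Const becomes a getter: mapping the focus into Const captures it, and a phantom-only retag propagates it back out unchanged:

```java
import java.util.function.Function;

public class VanLaarhovenGetterSketch {
    // Stand-alone Const; retag changes only the phantom parameter
    record Const<C, A>(C value) {
        <B> Const<C, B> retag() { return new Const<>(value); }
    }

    record Person(String name, int age) {}

    // A van Laarhoven-style lens for Person.name, specialised to Const:
    // (String -> Const<C, String>) -> (Person -> Const<C, Person>)
    static <C> Function<Person, Const<C, Person>> nameLens(Function<String, Const<C, String>> f) {
        return person -> f.apply(person.name()).retag();
    }

    // "view" instantiates the lens with Const's constructor to read the focus
    static String viewName(Person person) {
        Function<Person, Const<String, Person>> getter = nameLens(s -> new Const<>(s));
        return getter.apply(person).value();
    }

    public static void main(String[] args) {
        System.out.println(viewName(new Person("Alice", 30))); // Alice
    }
}
```

The same lens shape, instantiated with Identity instead of Const, would perform an update — which is why one definition serves both getting and setting in full van Laarhoven encodings.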
Use Case 3: Data Extraction from Validation Results
When traversing validation results, you often want to extract accumulated errors or valid data without transforming the individual results. Const provides a clean way to express this pattern.
record ValidationResult(boolean isValid, List<String> errors, Object data) {}
List<ValidationResult> results = List.of(
new ValidationResult(true, List.of(), "Valid data 1"),
new ValidationResult(false, List.of("Error A", "Error B"), null),
new ValidationResult(true, List.of(), "Valid data 2"),
new ValidationResult(false, List.of("Error C"), null)
);
// Extract all errors using Const
List<String> allErrors = new ArrayList<>();
for (ValidationResult result : results) {
// Use Const to extract errors, phantom type represents ValidationResult
Const<List<String>, ValidationResult> errorConst = new Const<>(result.errors());
allErrors.addAll(errorConst.value());
}
System.out.println("All errors: " + allErrors);
// Output: [Error A, Error B, Error C]
// Count valid results
Const<Integer, ValidationResult> validCount = results.stream()
.reduce(
new Const<>(0),
(acc, result) -> new Const<Integer, ValidationResult>(
result.isValid() ? acc.value() + 1 : acc.value()),
(c1, c2) -> new Const<>(c1.value() + c2.value()));
System.out.println("Valid results: " + validCount.value());
// Output: 2
The phantom type maintains the "context" of what we're extracting from (ValidationResult), whilst the constant value accumulates the data we care about (errors or counts).
Const vs Other Types
Understanding how Const relates to similar types clarifies its unique role:
| Type | First Parameter | Second Parameter | Primary Use |
|---|---|---|---|
| Const<C, A> | Constant value (stored) | Phantom (not stored) | Folds, getters, extraction |
| Tuple2<A, B> | First element (stored) | Second element (stored) | Pairing related values |
| Identity<A> | Value (stored) | N/A (single parameter) | Pure computation wrapper |
| Either<L, R> | Error (sum type) | Success (sum type) | Error handling |
Use Const when:
- You need to accumulate a single value during traversal
- You're implementing getters or read-only lenses
- You want to extract data without transformation
- The phantom type provides useful type-level information for composition
Use Tuple2 when:
- You need to store and work with both values
- Both parameters represent actual data
Use Identity when:
- You need a minimal monad wrapper with no additional effects
Exception Propagation Note
Although mapSecond doesn't transform the constant value, the mapper function is still invoked (with a null argument) so that any exceptions it throws propagate. This maintains consistency with bifunctor semantics.
Const<String, Integer> const_ = new Const<>("value");
// This will throw NullPointerException from the mapper
Const<String, Double> result = const_.mapSecond(i -> {
if (i == null) throw new NullPointerException("Expected non-null");
return i * 2.0;
});
This behaviour ensures that invalid mappers are detected, even though the mapper's result isn't used. For null-safe mappers, simply avoid dereferencing the parameter:
// Null-safe phantom type transformation
Const<String, Double> safe = const_.mapSecond(i -> 3.14);
Summary
- Const<C, A> holds a constant value of type C with a phantom type parameter A
- Phantom types exist only in type signatures, enabling type-safe composition without runtime overhead
- Bifunctor operations:
  - first transforms the constant value
  - second changes only the phantom type
  - bimap combines both (but only affects the constant via the first function)
- Use cases:
  - Efficient fold implementations that accumulate a single value
  - Compositional getters in lens and traversal patterns
  - Data extraction from complex structures without transformation
- Scala heritage: Mirrors Const in Cats, Scalaz, and Monocle
- External resources:
Understanding Const empowers you to write efficient, compositional code for data extraction and accumulation, leveraging patterns battle-tested in the Scala functional programming ecosystem.
The Transformers:
Combining Monadic Effects
The Problem
When building applications, we often encounter scenarios where we need to combine different computational contexts or effects. For example:
- An operation might be asynchronous (represented by CompletableFuture).
- The same operation might also fail with specific domain errors (represented by Either<DomainError, Result>).
- An operation might need access to a configuration (using Reader) and also be asynchronous.
- A computation might accumulate logs (using Writer) and also potentially fail (using Maybe or Either).
Monads Stack Poorly
Directly nesting these monadic types, like CompletableFuture<Either<DomainError, Result>> or Reader<Config, Optional<Data>>, leads to complex, deeply nested code ("callback hell" or nested flatMap/map calls). It becomes difficult to sequence operations and handle errors or contexts uniformly.
For instance, an operation might need to be both asynchronous and handle potential domain-specific errors. Representing this naively leads to nested types like:
// A future that, when completed, yields either a DomainError or a SuccessValue
Kind<CompletableFutureKind.Witness, Either<DomainError, SuccessValue>> nestedResult;
But now, how do we map or flatMap over this stack without lots of boilerplate?
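To make the boilerplate concrete, here is a hand-rolled sketch (using a minimal stand-in Either, not the library's types) of sequencing just two async, fallible steps. Every step must unpack the inner Either before the outer future can be chained:

```java
import java.util.concurrent.CompletableFuture;

public class NestedStackExample {
    // Minimal stand-in Either (the library's own Either has a richer API)
    sealed interface Either<L, R> permits Left, Right {}
    record Left<L, R>(L error) implements Either<L, R> {}
    record Right<L, R>(R value) implements Either<L, R> {}

    record DomainError(String message) {}

    static CompletableFuture<Either<DomainError, Integer>> parseAsync(String s) {
        return CompletableFuture.supplyAsync(() -> {
            try { return new Right<>(Integer.parseInt(s)); }
            catch (NumberFormatException e) { return new Left<>(new DomainError("not a number: " + s)); }
        });
    }

    static CompletableFuture<Either<DomainError, Integer>> doubleAsync(int n) {
        return CompletableFuture.supplyAsync(() -> new Right<>(n * 2));
    }

    // Sequencing by hand: every step must pattern-match the inner Either
    // before chaining the outer future
    static CompletableFuture<Either<DomainError, Integer>> pipeline(String input) {
        return parseAsync(input).thenCompose(either -> switch (either) {
            case Left<DomainError, Integer> l ->
                CompletableFuture.<Either<DomainError, Integer>>completedFuture(
                    new Left<>(l.error()));
            case Right<DomainError, Integer> r -> doubleAsync(r.value());
        });
    }

    public static void main(String[] args) {
        System.out.println(pipeline("21").join());
        System.out.println(pipeline("oops").join());
    }
}
```

Each additional step repeats the same Left-propagation dance; monad transformers factor that dance out once.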
Monad Transformers: A wrapper to simplify nested Monads
Monad Transformers are a design pattern in functional programming used to combine the effects of two different monads into a single, new monad. They provide a standard way to "stack" monadic contexts, allowing you to work with the combined structure more easily using familiar monadic operations like map and flatMap.
A monad transformer T takes a monad M and produces a new monad T<M> that combines the effects of both T (conceptually) and M.
For example:
- MaybeT m a wraps a monad m and adds Maybe-like failure
- StateT s m a wraps a monad m and adds state-handling capability
- ReaderT r m a adds dependency injection (a read-only environment)
They allow you to stack monadic behaviours.
Key characteristics:
- Stacking: They allow "stacking" monadic effects in a standard way.
- Unified Interface: The resulting transformed monad (e.g., EitherT<CompletableFutureKind, ...>) itself implements the Monad (and often MonadError, etc.) interface.
- Abstraction: They hide the complexity of manually managing the nested structure. You can use standard map, flatMap, and handleErrorWith operations on the transformed monad, and it automatically handles the logic for both underlying monads correctly.
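What a transformer buys you can be seen in a hand-rolled, specialised sketch (illustrative only; the library's EitherT generalises this over any outer monad F via Kind, rather than hard-coding CompletableFuture as done here):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public class FutureEitherSketch {
    sealed interface Either<L, R> permits Left, Right {}
    record Left<L, R>(L error) implements Either<L, R> {}
    record Right<L, R>(R value) implements Either<L, R> {}

    // A transformer specialised to CompletableFuture + Either: one flatMap
    // threads both the async context and the error channel
    record FutureEither<L, R>(CompletableFuture<Either<L, R>> value) {
        static <L, R> FutureEither<L, R> right(R r) {
            return new FutureEither<>(CompletableFuture.completedFuture(new Right<>(r)));
        }

        static <L, R> FutureEither<L, R> left(L l) {
            return new FutureEither<>(CompletableFuture.completedFuture(new Left<>(l)));
        }

        <B> FutureEither<L, B> flatMap(Function<R, FutureEither<L, B>> f) {
            return new FutureEither<>(value.thenCompose(either -> switch (either) {
                case Left<L, R> l ->
                    CompletableFuture.<Either<L, B>>completedFuture(new Left<>(l.error()));
                case Right<L, R> r -> f.apply(r.value()).value();
            }));
        }
    }

    public static void main(String[] args) {
        // Short-circuiting is automatic: the second step never runs after a Left
        FutureEither<String, Integer> ok =
            FutureEither.<String, Integer>right(20).flatMap(n -> FutureEither.right(n + 22));
        FutureEither<String, Integer> failed =
            FutureEither.<String, Integer>left("boom").flatMap(n -> FutureEither.right(n + 22));
        System.out.println(ok.value().join());
        System.out.println(failed.value().join());
    }
}
```

Writing one such class per outer monad is exactly the duplication HKT simulation avoids: the library's EitherT abstracts the CompletableFuture part away behind Kind<F, ...>.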
Transformers in Higher-Kinded-J
1. EitherT<F, L, R> (Monad Transformer)
- Definition: A monad transformer (EitherT) that combines an outer monad F with an inner Either<L, R>. Implemented as a record wrapping Kind<F, Either<L, R>>.
- Kind Interface: EitherTKind<F, L, R>
- Witness Type G: EitherTKind.Witness<F, L> (where F and L are fixed for a given type class instance)
- Helper: EitherTKindHelper (wrap, unwrap). Instances are primarily created via EitherT static factories (fromKind, right, left, fromEither, liftF).
- Type Class Instances: EitherTMonad<F, L> (MonadError<EitherTKind.Witness<F, L>, L>)
- Notes: Simplifies working with nested structures like F<Either<L, R>>. Requires a Monad<F> instance for the outer monad F, passed to its constructor. Implements MonadError for the inner Either's Left type L. See the Order Processing Example Walkthrough for practical usage with CompletableFuture as F.
- Usage: How to use the EitherT Monad Transformer
2. MaybeT<F, A> (Monad Transformer)
- Definition: A monad transformer (MaybeT) that combines an outer monad F with an inner Maybe<A>. Implemented as a record wrapping Kind<F, Maybe<A>>.
- Kind Interface: MaybeTKind<F, A>
- Witness Type G: MaybeTKind.Witness<F> (where F is fixed for a given type class instance)
- Helper: MaybeTKindHelper (wrap, unwrap). Instances are primarily created via MaybeT static factories (fromKind, just, nothing, fromMaybe, liftF).
- Type Class Instances: MaybeTMonad<F> (MonadError<MaybeTKind.Witness<F>, Void>)
- Notes: Simplifies working with nested structures like F<Maybe<A>>. Requires a Monad<F> instance for the outer monad F. Implements MonadError where the error type is Void, corresponding to the Nothing state of the inner Maybe.
- Usage: How to use the MaybeT Monad Transformer
3. OptionalT<F, A> (Monad Transformer)
- Definition: A monad transformer (OptionalT) that combines an outer monad F with an inner java.util.Optional<A>. Implemented as a record wrapping Kind<F, Optional<A>>.
- Kind Interface: OptionalTKind<F, A>
- Witness Type G: OptionalTKind.Witness<F> (where F is fixed for a given type class instance)
- Helper: OptionalTKindHelper (wrap, unwrap). Instances are primarily created via OptionalT static factories (fromKind, some, none, fromOptional, liftF).
- Type Class Instances: OptionalTMonad<F> (MonadError<OptionalTKind.Witness<F>, Void>)
- Notes: Simplifies working with nested structures like F<Optional<A>>. Requires a Monad<F> instance for the outer monad F. Implements MonadError where the error type is Void, corresponding to the Optional.empty() state of the inner Optional.
- Usage: How to use the OptionalT Monad Transformer
4. ReaderT<F, R, A> (Monad Transformer)
- Definition: A monad transformer (ReaderT) that combines an outer monad F with inner Reader<R, A>-like behaviour (dependency on an environment R). Implemented as a record wrapping a function R -> Kind<F, A>.
- Kind Interface: ReaderTKind<F, R, A>
- Witness Type G: ReaderTKind.Witness<F, R> (where F and R are fixed for a given type class instance)
- Helper: ReaderTKindHelper (wrap, unwrap). Instances are primarily created via ReaderT static factories (of, lift, reader, ask).
- Type Class Instances: ReaderTMonad<F, R> (Monad<ReaderTKind<F, R, ?>>)
- Notes: Simplifies managing computations that depend on a read-only environment R while also involving other monadic effects from F. Requires a Monad<F> instance for the outer monad. The run() method of ReaderT takes an R and returns Kind<F, A>.
- Usage: How to use the ReaderT Monad Transformer
5. StateT<S, F, A> (Monad Transformer)
- Definition: A monad transformer (StateT) that adds stateful computation (state type S) to an underlying monad F. It represents a function S -> Kind<F, StateTuple<S, A>>.
- Kind Interface: StateTKind<S, F, A>
- Witness Type G: StateTKind.Witness<S, F> (where S, the state type, and F, the underlying monad witness, are fixed for a given type class instance; A is the value type parameter)
- Helper: StateTKindHelper (narrow, wrap, runStateT, evalStateT, execStateT, lift). Instances are created via StateT.create(), StateTMonad.of(), or StateTKind.lift().
- Type Class Instances: StateTMonad<S, F> (Monad<StateTKind.Witness<S, F>>)
- Notes: Allows combining stateful logic with other monadic effects from F. Requires a Monad<F> instance for the underlying monad. The runStateT(initialState) method executes the computation, returning Kind<F, StateTuple<S, A>>.
- Usage: How to use the StateT Monad Transformer
Further Reading
Start with the Java-focused articles to understand why transformers matter in Java, then explore the General FP theory, and finally examine how other libraries implement these patterns.
Java-Focused Resources
Beginner Level:
- 📚 Monad Transformers in Java: A Practical Guide - John McClean's clear explanation with Cyclops examples (15 min read)
- 🎥 Functional Programming in Java: Beyond Streams - Venkat Subramaniam discusses composition patterns (45 min watch)
- 📄 Combining CompletableFuture with Optional: The Problem - Baeldung's treatment of nested monads (10 min read)
Intermediate Level:
- 📄 Stacking Monads in Functional Java - ATT Israel Engineering team's practical examples (20 min read)
- 📄 Vavr's Approach to Composition - Explore how Vavr handles similar challenges (interactive docs)
Advanced:
- 🔬 Free Monads and Monad Transformers - Rock the JVM's Scala-based but Java-applicable deep dive (30 min read)
General FP Concepts
- 📖 Monad Transformers Step by Step - Martin Grabmüller's classic paper, accessible even for Java developers (PDF, 40 min read)
- 🌐 Monad Transformer - HaskellWiki - Formal definitions with clear examples
- 📖 What is a Monad Transformer? - FP Complete's tutorial with interactive examples
Related Libraries & Comparisons
- 🔗 Cyclops-React Transformers - AOL's comprehensive Java FP library
- 🔗 Vavr Composition Patterns - Alternative approach to the same problems
- 🔗 Arrow-kt Transformers - Kotlin's excellent documentation
- 🔗 Cats MTL - Scala's monad transformer library (advanced)
Community & Discussion
- 💬 Why are Monad Transformers useful? - Stack Overflow discussion with practical examples
- 💬 Monad Transformers in Production - Real-world experiences from Java developers
The EitherT Transformer:
Combining Monadic Effects
- How to combine async operations (CompletableFuture) with typed error handling (Either)
- Building workflows that can fail with specific domain errors while remaining async
- Using fromKind, fromEither, and liftF to construct EitherT values
- Real-world order processing with validation, inventory checks, and payment processing
- Why EitherT eliminates "callback hell" in complex async workflows
EitherT Monad Transformer.
EitherT<F, L, R>: Combining any Monad F with Either<L, R>
The EitherT monad transformer allows you to combine the error-handling capabilities of Either<L, R> with another outer monad F. It transforms a computation that results in Kind<F, Either<L, R>> into a single monadic structure that can be easily composed. This is particularly useful when dealing with operations that can fail (represented by Left<L>) within an effectful context F (like asynchronous operations using CompletableFutureKind or computations involving state with StateKind).
- F: The witness type of the outer monad (e.g., CompletableFutureKind.Witness, OptionalKind.Witness). This monad handles the primary effect (e.g., asynchronicity, optionality).
- L: The Left type of the inner Either. This typically represents the error type for the computation or an alternative result.
- R: The Right type of the inner Either. This typically represents the success value type.
public record EitherT<F, L, R>(@NonNull Kind<F, Either<L, R>> value) {
/* ... static factories ... */ }
It holds a value of type Kind<F, Either<L, R>>. The real power comes from its associated type class instance, EitherTMonad.
Essentially, EitherT<F, L, R> wraps a value of type Kind<F, Either<L, R>>. It represents a computation within the context F that will eventually yield an Either<L, R>.
The primary goal of EitherT is to provide a unified Monad interface (specifically MonadError for the L type) for this nested structure, hiding the complexity of manually handling both the outer F context and the inner Either context.
EitherTKind<F, L, R>: The Witness Type
Just like other types in Higher-Kinded-J, EitherT needs a corresponding Kind interface to act as its witness type in generic functions. This is EitherTKind<F, L, R>.
- It extends Kind<G, R> where G (the witness for the combined monad) is EitherTKind.Witness<F, L>. F and L are fixed for a specific EitherT context, while R is the variable type parameter A in Kind<G, A>.
You'll primarily interact with this type when providing type signatures or receiving results from EitherTMonad methods.
EitherTKindHelper
- Provides widen and narrow methods to safely convert between the concrete EitherT<F, L, R> and its Kind representation (Kind<EitherTKind<F, L, ?>, R>).
EitherTMonad<F, L>: Operating on EitherT
- The EitherTMonad class implements MonadError<EitherTKind.Witness<F, L>, L>.
- It requires a Monad<F> instance for the outer monad F, provided during construction. This outer monad instance is used internally to handle the effects of F.
- It uses EITHER_T.widen and EITHER_T.narrow internally to manage the conversion between the Kind and the concrete EitherT.
- The error type E for MonadError is fixed to L, the Left type of the inner Either. Error handling operations like raiseError(L l) create an EitherT representing F<Left(l)>, and handleErrorWith allows recovering from such Left states.
// Example: F = CompletableFutureKind.Witness, L = DomainError
// 1. Get the MonadError instance for the outer monad F
MonadError<CompletableFutureKind.Witness, Throwable> futureMonad = CompletableFutureMonad.INSTANCE;
// 2. Create the EitherTMonad, providing the outer monad instance
// This EitherTMonad handles DomainError for the inner Either.
MonadError<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, DomainError> eitherTMonad =
new EitherTMonad<>(futureMonad);
// Now 'eitherTMonad' can be used to operate on Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, A> values.
- eitherTMonad.of(value): Lifts a pure value A into the EitherT context. Result: F<Right(A)>.
- eitherTMonad.map(f, eitherTKind): Applies a function A -> B to the Right value inside the nested structure, preserving both the F and Either contexts (if Right). Result: F<Either<L, B>>.
- eitherTMonad.flatMap(f, eitherTKind): The core sequencing operation. Takes a function A -> Kind<EitherTKind.Witness<F, L>, B> (i.e., A -> EitherT<F, L, B>). It unwraps the input EitherT, handles the F context, and checks the inner Either:
  - If Left(l), it propagates F<Left(l)>.
  - If Right(a), it applies f(a) to get the next EitherT<F, L, B> and extracts its inner Kind<F, Either<L, B>>, effectively chaining the F contexts and the Either logic.
- eitherTMonad.raiseError(errorL): Creates an EitherT representing a failure in the inner Either. Result: F<Left(L)>.
- eitherTMonad.handleErrorWith(eitherTKind, handler): Handles a failure L from the inner Either. Takes a handler L -> Kind<EitherTKind.Witness<F, L>, A>. It unwraps the input EitherT and checks the inner Either:
  - If Right(a), it propagates F<Right(a)>.
  - If Left(l), it applies handler(l) to get a recovery EitherT<F, L, A> and extracts its inner Kind<F, Either<L, A>>.
You typically create EitherT instances using its static factory methods, providing the necessary outer Monad<F> instance:
// Assume:
Monad<OptionalKind.Witness> optMonad = OptionalMonad.INSTANCE; // Outer Monad F=Optional
String errorL = "FAILED";
String successR = "OK";
Integer otherR = 123;
// 1. Lifting a pure 'Right' value: Optional<Right(R)>
EitherT<OptionalKind.Witness, String, String> etRight = EitherT.right(optMonad, successR);
// Resulting wrapped value: Optional.of(Either.right("OK"))
// 2. Lifting a pure 'Left' value: Optional<Left(L)>
EitherT<OptionalKind.Witness, String, Integer> etLeft = EitherT.left(optMonad, errorL);
// Resulting wrapped value: Optional.of(Either.left("FAILED"))
// 3. Lifting a plain Either: Optional<Either(input)>
Either<String, String> plainEither = Either.left(errorL);
EitherT<OptionalKind.Witness, String, String> etFromEither = EitherT.fromEither(optMonad, plainEither);
// Resulting wrapped value: Optional.of(Either.left("FAILED"))
// 4. Lifting an outer monad value F<R>: Optional<Right(R)>
Kind<OptionalKind.Witness, Integer> outerOptional = OPTIONAL.widen(Optional.of(otherR));
EitherT<OptionalKind.Witness, String, Integer> etLiftF = EitherT.liftF(optMonad, outerOptional);
// Resulting wrapped value: Optional.of(Either.right(123))
// 5. Wrapping an existing nested Kind: F<Either<L, R>>
Kind<OptionalKind.Witness, Either<String, String>> nestedKind =
OPTIONAL.widen(Optional.of(Either.right(successR)));
EitherT<OptionalKind.Witness, String, String> etFromKind = EitherT.fromKind(nestedKind);
// Resulting wrapped value: Optional.of(Either.right("OK"))
// Accessing the wrapped value:
Kind<OptionalKind.Witness, Either<String, String>> wrappedValue = etRight.value();
Optional<Either<String, String>> unwrappedOptional = OPTIONAL.narrow(wrappedValue);
// unwrappedOptional is Optional.of(Either.right("OK"))
The most common use case for EitherT is combining asynchronous operations (CompletableFuture) with domain error handling (Either). The OrderWorkflowRunner class provides a detailed example.
Here's a simplified conceptual structure based on that example:
public class EitherTExample {
// --- Setup ---
// Assume DomainError is a sealed interface for specific errors
// Re-defining a local DomainError to avoid dependency on the full DomainError hierarchy for this isolated example.
// In a real scenario, you would use the shared DomainError.
record DomainError(String message) {}
record ValidatedData(String data) {}
record ProcessedData(String data) {}
MonadError<CompletableFutureKind.Witness, Throwable> futureMonad = CompletableFutureMonad.INSTANCE;
MonadError<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, DomainError> eitherTMonad =
new EitherTMonad<>(futureMonad);
// --- Workflow Steps (returning Kinds) ---
// Simulates a sync validation returning Either
Kind<EitherKind.Witness<DomainError>, ValidatedData> validateSync(String input) {
System.out.println("Validating synchronously...");
if (input.isEmpty()) {
return EITHER.widen(Either.left(new DomainError("Input empty")));
}
return EITHER.widen(Either.right(new ValidatedData("Validated:" + input)));
}
// Simulates an async processing step returning Future<Either>
Kind<CompletableFutureKind.Witness, Either<DomainError, ProcessedData>> processAsync(ValidatedData vd) {
System.out.println("Processing asynchronously for: " + vd.data());
CompletableFuture<Either<DomainError, ProcessedData>> future =
CompletableFuture.supplyAsync(() -> {
try {
Thread.sleep(50);
} catch (InterruptedException e) { /* ignore */ }
if (vd.data().contains("fail")) {
return Either.left(new DomainError("Processing failed"));
}
return Either.right(new ProcessedData("Processed:" + vd.data()));
});
return FUTURE.widen(future);
}
// Function to run the workflow for given input
Kind<CompletableFutureKind.Witness, Either<DomainError, ProcessedData>> runWorkflow(String initialInput) {
// Start with initial data lifted into EitherT
Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, String> initialET = eitherTMonad.of(initialInput);
// Step 1: Validate (Sync Either lifted into EitherT)
Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, ValidatedData> validatedET =
eitherTMonad.flatMap(
input -> {
// Call sync step returning Kind<EitherKind.Witness<DomainError>, ...>
Kind<EitherKind.Witness<DomainError>, ValidatedData> validationResult = validateSync(input);
// Lift the Either result into EitherT using fromEither
return EitherT.fromEither(futureMonad, EITHER.narrow(validationResult));
},
initialET
);
// Step 2: Process (Async Future<Either> lifted into EitherT)
Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, ProcessedData> processedET =
eitherTMonad.flatMap( // Chain from the validation result
// Executed only if validatedET was F<Right(...)>; lift the async
// F<Either> result directly into EitherT using fromKind
vd -> EitherT.fromKind(processAsync(vd)),
validatedET // Input is the result of the validation step
);
// Unwrap the final EitherT to get the underlying Future<Either>
return ((EitherT<CompletableFutureKind.Witness, DomainError, ProcessedData>) processedET).value();
}
public void asyncWorkflowErrorHandlingExample(){
// --- Workflow Definition using EitherT ---
// Input data
String inputData = "Data";
String badInputData = "";
String processingFailData = "Data-fail";
// --- Execution ---
System.out.println("--- Running Good Workflow ---");
Kind<CompletableFutureKind.Witness, Either<DomainError, ProcessedData>> resultGoodKind = runWorkflow(inputData);
System.out.println("Good Result: "+FUTURE.join(resultGoodKind));
// Expected: Right(ProcessedData[data=Processed:Validated:Data])
System.out.println("\n--- Running Bad Input Workflow ---");
Kind<CompletableFutureKind.Witness, Either<DomainError, ProcessedData>> resultBadInputKind = runWorkflow(badInputData);
System.out.println("Bad Input Result: "+ FUTURE.join(resultBadInputKind));
// Expected: Left(DomainError[message=Input empty])
System.out.println("\n--- Running Processing Failure Workflow ---");
Kind<CompletableFutureKind.Witness, Either<DomainError, ProcessedData>> resultProcFailKind = runWorkflow(processingFailData);
System.out.println("Processing Fail Result: "+FUTURE.join(resultProcFailKind));
// Expected: Left(DomainError[message=Processing failed])
}
public static void main(String[] args){
EitherTExample example = new EitherTExample();
example.asyncWorkflowErrorHandlingExample();
}
}
This example demonstrates:
- Instantiating EitherTMonad with the outer CompletableFutureMonad.
- Lifting the initial value using eitherTMonad.of.
- Using eitherTMonad.flatMap to sequence steps.
- Lifting a synchronous Either result into EitherT using EitherT.fromEither.
- Lifting an asynchronous Kind<F, Either<L, R>> result using EitherT.fromKind.
- Automatic short-circuiting: if validation returns Left, the processing step is skipped.
- Unwrapping the final EitherT using .value() to get the Kind<CompletableFutureKind.Witness, Either<DomainError, ProcessedData>> result.
The primary use is chaining operations using flatMap and handling errors using handleErrorWith or related methods. The OrderWorkflowRunner is the best example. Let's break down a key part:
// --- From OrderWorkflowRunner.java ---
// Assume setup:
// F = CompletableFutureKind<?>
// L = DomainError
// futureMonad = CompletableFutureMonad.INSTANCE;
// eitherTMonad = new EitherTMonad<>(futureMonad);
// steps = new OrderWorkflowSteps(dependencies); // Contains workflow logic
// Initial Context (lifted)
WorkflowContext initialContext = WorkflowContext.start(orderData);
Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, WorkflowContext> initialET =
eitherTMonad.of(initialContext); // F<Right(initialContext)>
// Step 1: Validate Order (Synchronous - returns Either)
Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, WorkflowContext> validatedET =
eitherTMonad.flatMap( // Use flatMap on EitherTMonad
ctx -> { // Lambda receives WorkflowContext if initialET was Right
// Call sync step -> Either<DomainError, ValidatedOrder>
Either<DomainError, ValidatedOrder> syncResultEither =
EITHER.narrow(steps.validateOrder(ctx.initialData()));
// Lift sync Either into EitherT: -> F<Either<DomainError, ValidatedOrder>>
Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, ValidatedOrder>
validatedOrderET = EitherT.fromEither(futureMonad, syncResultEither);
// If validation produced Left, map is skipped.
// If validation produced Right(vo), map updates the context: F<Right(ctx.withValidatedOrder(vo))>
return eitherTMonad.map(ctx::withValidatedOrder, validatedOrderET);
},
initialET // Input to the flatMap
);
// Step 2: Check Inventory (Asynchronous - returns Future<Either<DomainError, Void>>)
Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, WorkflowContext> inventoryET =
eitherTMonad.flatMap( // Chain from validation result
ctx -> { // Executed only if validatedET was F<Right(...)>
// Call async step -> Kind<CompletableFutureKind.Witness, Either<DomainError, Void>>
Kind<CompletableFutureKind.Witness, Either<DomainError, Void>> inventoryCheckFutureKind =
steps.checkInventoryAsync(ctx.validatedOrder().productId(), ctx.validatedOrder().quantity());
// Lift the F<Either> directly into EitherT using fromKind
Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, Void> inventoryCheckET =
EitherT.fromKind(inventoryCheckFutureKind);
// If inventory check resolves to Right, update context. If Left, map is skipped.
return eitherTMonad.map(ignored -> ctx.withInventoryChecked(), inventoryCheckET);
},
validatedET // Input is result of validation step
);
// Step 4: Create Shipment (Asynchronous with Recovery)
Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, WorkflowContext> shipmentET =
eitherTMonad.flatMap( // Chain from previous step
ctx -> {
// Call async shipment step -> F<Either<DomainError, ShipmentInfo>>
Kind<CompletableFutureKind.Witness, Either<DomainError, ShipmentInfo>> shipmentAttemptFutureKind =
steps.createShipmentAsync(ctx.validatedOrder().orderId(), ctx.validatedOrder().shippingAddress());
// Lift into EitherT
Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, ShipmentInfo> shipmentAttemptET =
EitherT.fromKind(shipmentAttemptFutureKind);
// *** Error Handling using MonadError ***
Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, ShipmentInfo> recoveredShipmentET =
eitherTMonad.handleErrorWith( // Operates on the EitherT value
shipmentAttemptET,
error -> { // Lambda receives DomainError if shipmentAttemptET resolves to Left(error)
if (error instanceof DomainError.ShippingError se && "Temporary Glitch".equals(se.reason())) {
// Specific recoverable error: Return a *successful* EitherT
return eitherTMonad.of(new ShipmentInfo("DEFAULT_SHIPPING_USED"));
} else {
// Non-recoverable error: Re-raise it within EitherT
return eitherTMonad.raiseError(error); // Returns F<Left(error)>
}
});
// Map the potentially recovered result to update context
return eitherTMonad.map(ctx::withShipmentInfo, recoveredShipmentET);
},
paymentET // Assuming paymentET was the previous step
);
// ... rest of workflow ...
// Final unwrap
// EitherT<CompletableFutureKind.Witness, DomainError, FinalResult> finalET = ...;
// Kind<CompletableFutureKind.Witness, Either<DomainError, FinalResult>> finalResultKind = finalET.value();
This demonstrates how EitherTMonad.flatMap sequences the steps, while EitherT.fromEither, EitherT.fromKind, and eitherTMonad.of/raiseError/handleErrorWith manage the lifting and error handling within the combined Future<Either<...>> context.
The Higher-Kinded-J library simplifies the implementation and usage of concepts like monad transformers (e.g., EitherT) in Java precisely because it simulates Higher-Kinded Types (HKTs). Here's how:
1. The Core Problem Without HKTs: Java's type system doesn't allow you to directly parameterize a type by a type constructor like List, Optional, or CompletableFuture. You can write List<String>, but you cannot easily write a generic class Transformer<F, A> where F itself represents any container type (like List<_>) and A is the value type. This limitation makes defining general monad transformers rather difficult. A monad transformer like EitherT needs to combine an arbitrary outer monad F with the inner Either monad. Without HKTs, you would typically have to:
   - Create separate, specific transformers for each outer monad (e.g., EitherTOptional, EitherTFuture, EitherTIO). This leads to significant code duplication.
   - Resort to complex, often unsafe casting or reflection.
   - Write extremely verbose code manually handling the nested structure for every combination.
2. How Higher-Kinded-J Helps (by Simulating HKTs): Higher-Kinded-J introduces the Kind<F, A> interface. This interface, along with specific "witness types" (like OptionalKind.Witness, CompletableFutureKind.Witness, EitherKind.Witness<L>), simulates the concept of F<A>. It allows you to pass F (the type constructor, represented by its witness type) as a type parameter, even though Java doesn't support it natively.
3. Simplifying Transformer Definition (EitherT<F, L, R>): Because we can now simulate F<A> using Kind<F, A>, we can define the EitherT data structure generically:

   // Simplified from EitherT.java
   public record EitherT<F, L, R>(@NonNull Kind<F, Either<L, R>> value)
       implements EitherTKind<F, L, R> { /* ... */ }

   Here, F is a type parameter representing the witness type of the outer monad. EitherT doesn't need to know which specific monad F is at compile time; it just knows it holds a Kind<F, ...>. This makes the EitherT structure itself general-purpose.
4. Simplifying Transformer Operations (EitherTMonad<F, L>): The real benefit comes with the type class instance EitherTMonad. This class implements MonadError<EitherTKind.Witness<F, L>, L>, providing the standard monadic operations (map, flatMap, of, ap, raiseError, handleErrorWith) for the combined EitherT structure. Critically, EitherTMonad takes the Monad<F> instance for the specific outer monad F as a constructor argument:

   // From EitherTMonad.java
   public class EitherTMonad<F, L> implements MonadError<EitherTKind.Witness<F, L>, L> {
     private final @NonNull Monad<F> outerMonad; // <-- Holds the specific outer monad instance

     public EitherTMonad(@NonNull Monad<F> outerMonad) {
       this.outerMonad = Objects.requireNonNull(outerMonad, "Outer Monad instance cannot be null");
     }
     // ... implementation of map, flatMap etc. ...
   }

   Inside its map, flatMap, etc., implementations, EitherTMonad uses the provided outerMonad instance (via its map and flatMap methods) to handle the outer context F, while also managing the inner Either logic (checking for Left/Right, applying functions, propagating Left). This is where Higher-Kinded-J drastically simplifies things:
   - You only need one EitherTMonad implementation.
   - It works generically for any outer monad F for which you have a Monad<F> instance (like OptionalMonad, CompletableFutureMonad, IOMonad, etc.).
   - The complex logic of combining the two monads' behaviours (e.g., how flatMap should work on F<Either<L, R>>) is encapsulated within EitherTMonad, leveraging the simulated HKTs and the provided outerMonad instance.
   - As a user, you just instantiate EitherTMonad with the appropriate outer monad instance and then use its standard methods (map, flatMap, etc.) on your EitherT values, as seen in the OrderWorkflowRunner example. You don't need to manually handle the nesting.
In essence, the HKT simulation provided by Higher-Kinded-J allows defining the structure (EitherT) and the operations (EitherTMonad) generically over the outer monad F, overcoming Java's native limitations and making monad transformers feasible and much less boilerplate-heavy than they would otherwise be.
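To make the simulation concrete, here is a minimal, self-contained sketch of the defunctionalisation idea in plain Java. The names here (Kind, OptKind, Functor, addOne) are illustrative stand-ins, not the library's actual definitions, but they show the essential trick: a witness type lets one generic function work for any container that has a Functor instance.

```java
import java.util.Optional;
import java.util.function.Function;

// A minimal, self-contained sketch of the HKT simulation idea.
// All names here are illustrative, not the library's actual definitions.
public class HktSketch {
  // Kind<F, A> stands in for the type Java cannot express directly: F<A>.
  interface Kind<F, A> {}

  // A wrapper plus "witness type" for Optional.
  static final class OptKind<A> implements Kind<OptKind.Witness, A> {
    static final class Witness {}
    final Optional<A> value;
    OptKind(Optional<A> value) { this.value = value; }
  }

  // A Functor type class over the simulated F<A>.
  interface Functor<F> {
    <A, B> Kind<F, B> map(Function<A, B> f, Kind<F, A> fa);
  }

  static final Functor<OptKind.Witness> OPT_FUNCTOR = new Functor<OptKind.Witness>() {
    @Override
    public <A, B> Kind<OptKind.Witness, B> map(Function<A, B> f, Kind<OptKind.Witness, A> fa) {
      Optional<A> opt = ((OptKind<A>) fa).value; // "narrow"
      return new OptKind<>(opt.map(f));          // "widen"
    }
  };

  // Generic code written once, usable with ANY F that has a Functor instance.
  static <F> Kind<F, Integer> addOne(Functor<F> functor, Kind<F, Integer> fa) {
    return functor.map(n -> n + 1, fa);
  }

  public static void main(String[] args) {
    Kind<OptKind.Witness, Integer> result = addOne(OPT_FUNCTOR, new OptKind<>(Optional.of(41)));
    System.out.println(((OptKind<Integer>) result).value); // Optional[42]
  }
}
```

Adding a ListKind with its own Witness and Functor instance would let the same addOne work over lists unchanged; that is exactly the leverage EitherT and EitherTMonad build on.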
Further Reading
Start with the Java-focused resources to see practical applications, then explore General FP concepts for deeper understanding, and finally check Related Libraries to see alternative approaches.
Java-Focused Resources
Beginner Level:
- 📚 Error Handling with Either in Java - Baeldung's introduction to Either (10 min read)
- 📄 CompletableFuture Error Handling Patterns - Tomasz Nurkiewicz's comparison to traditional async error handling (15 min read)
- 🎥 Railway Oriented Programming in Java - Scott Wlaschin's classic talk adapted to Java contexts (60 min watch)
Intermediate Level:
- 📄 Combining Async and Error Handling in Java - Real-world async error workflows (20 min read)
- 📄 Vavr's Either vs Java's Optional - When to choose what (15 min read)
Advanced:
- 🔬 Type-Safe Error Handling at Scale - Zalando's production experience (conference talk, 40 min)
General FP Concepts
- 📖 Railway Oriented Programming - F# for Fun and Profit's accessible explanation (20 min read)
- 📖 Handling Errors Without Exceptions - Chapter 4 from "Functional Programming in Scala" (free excerpt)
- 🌐 Either Type - Wikipedia - Formal definition and language comparisons
Related Libraries & Comparisons
- 🔗 Vavr Either Documentation - Mature Java FP library's approach
- 🔗 Arrow Either - Kotlin's excellent API design
- 🔗 Result Type in Rust - See how a systems language solves this problem
Community & Discussion
- 💬 Either vs Exceptions in Java - Stack Overflow debate with practical insights
- 💬 Using Either in Production Java Code - Hacker News discussion with war stories
The OptionalT Transformer:
Combining Monadic Effects with java.util.Optional
- How to integrate Java's Optional with other monadic contexts
- Building async workflows where each step might return empty results
- Using some, none, and fromOptional to construct OptionalT values
- Creating multi-step data retrieval with graceful failure handling
- Providing default values when optional chains result in empty
OptionalT Monad Transformer
The OptionalT monad transformer (short for Optional Transformer) is designed to combine the semantics of java.util.Optional<A> (representing a value that might be present or absent) with an arbitrary outer monad F. It effectively allows you to work with computations of type Kind<F, Optional<A>> as a single, unified monadic structure.
This is particularly useful when operations within an effectful context F (such as asynchronicity with CompletableFutureKind, non-determinism with ListKind, or dependency injection with ReaderKind) can also result in an absence of a value (represented by Optional.empty()).
Structure
OptionalT<F, A>: The Core Data Type
OptionalT<F, A> is a record that wraps a computation yielding Kind<F, Optional<A>>.
public record OptionalT<F, A>(@NonNull Kind<F, Optional<A>> value)
implements OptionalTKind<F, A> {
// ... static factory methods ...
}
- F: The witness type of the outer monad (e.g., CompletableFutureKind.Witness, ListKind.Witness). This monad encapsulates the primary effect of the computation.
- A: The type of the value that might be present within the Optional, which itself is within the context of F.
- value: The core wrapped value of type Kind<F, Optional<A>>. This represents an effectful computation in F that, upon completion, yields a java.util.Optional<A>.
OptionalTKind<F, A>: The Witness Type
For integration with Higher-Kinded-J's generic programming model, OptionalTKind<F, A> acts as the higher-kinded type witness.
- It extends Kind<G, A>, where G (the witness for the combined OptionalT monad) is OptionalTKind.Witness<F>.
- The outer monad F is fixed for a particular OptionalT context, while A is the variable type parameter representing the value inside the Optional.
public interface OptionalTKind<F, A> extends Kind<OptionalTKind.Witness<F>, A> {
// Witness type G = OptionalTKind.Witness<F>
// Value type A = A (from Optional<A>)
}
OptionalTKindHelper: Utility for Wrapping and Unwrapping
OptionalTKindHelper is a final utility class providing static methods to seamlessly convert between the concrete OptionalT<F, A> type and its Kind representation (Kind<OptionalTKind.Witness<F>, A>).
public enum OptionalTKindHelper {
OPTIONAL_T;
// Unwraps Kind<OptionalTKind.Witness<F>, A> to OptionalT<F, A>
public <F, A> @NonNull OptionalT<F, A> narrow(
@Nullable Kind<OptionalTKind.Witness<F>, A> kind);
// Wraps OptionalT<F, A> into OptionalTKind<F, A>
public <F, A> @NonNull OptionalTKind<F, A> widen(
@NonNull OptionalT<F, A> optionalT);
}
Internally, it uses a private record OptionalTHolder to implement OptionalTKind, but this is an implementation detail.
OptionalTMonad<F>: Operating on OptionalT
The OptionalTMonad<F> class implements MonadError<OptionalTKind.Witness<F>, Unit>. This provides the standard monadic operations (of, map, flatMap, ap) and error handling capabilities for the OptionalT structure. The error type E for MonadError is fixed to Unit, signifying that an "error" in this context is the Optional.empty() state within F<Optional<A>>.
- It requires a Monad<F> instance for the outer monad F, which must be supplied during construction. This outerMonad is used to manage and sequence the effects of F.
// Example: F = CompletableFutureKind.Witness
// 1. Get the Monad instance for the outer monad F
Monad<CompletableFutureKind.Witness> futureMonad = CompletableFutureMonad.INSTANCE;
// 2. Create the OptionalTMonad
OptionalTMonad<CompletableFutureKind.Witness> optionalTFutureMonad =
new OptionalTMonad<>(futureMonad);
// Now 'optionalTFutureMonad' can be used to operate on
// Kind<OptionalTKind.Witness<CompletableFutureKind.Witness>, A> values.
- optionalTMonad.of(value): Lifts a (nullable) value A into the OptionalT context. The underlying operation is outerMonad.of(Optional.ofNullable(value)). Result: OptionalT(F<Optional<A>>).
- optionalTMonad.map(func, optionalTKind): Applies a function A -> B to the value A if it's present within the Optional and the F context is successful. The transformation occurs within outerMonad.map. If func returns null, the result becomes F<Optional.empty()>. Result: OptionalT(F<Optional<B>>).
- optionalTMonad.flatMap(func, optionalTKind): The primary sequencing operation. It takes a function A -> Kind<OptionalTKind.Witness<F>, B> (which effectively means A -> OptionalT<F, B>). It runs the initial OptionalT to get Kind<F, Optional<A>>. Using outerMonad.flatMap, if this yields an Optional.of(a), func is applied to a to get the next OptionalT<F, B>. The value of this new OptionalT (Kind<F, Optional<B>>) becomes the result. If at any point an Optional.empty() is encountered within F, it short-circuits and propagates F<Optional.empty()>. Result: OptionalT(F<Optional<B>>).
- optionalTMonad.raiseError(error) (where error is Unit): Creates an OptionalT representing absence. Result: OptionalT(F<Optional.empty()>).
- optionalTMonad.handleErrorWith(optionalTKind, handler): Handles an empty state from the inner Optional. Takes a handler Function<Unit, Kind<OptionalTKind.Witness<F>, A>>.
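To see what flatMap is doing under the hood, here is the same sequencing rule hand-rolled for the concrete case F = CompletableFuture, using only the JDK (a sketch of the semantics, not the library's implementation): a present value runs the next effectful step, while an empty one short-circuits.

```java
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// The OptionalT flatMap rule, specialised by hand to F = CompletableFuture.
// This helper is a sketch for illustration, not the library API.
public class OptionalTFlatMapSketch {
  static <A, B> CompletableFuture<Optional<B>> flatMapOpt(
      CompletableFuture<Optional<A>> fa,
      Function<A, CompletableFuture<Optional<B>>> f) {
    return fa.thenCompose(opt ->
        opt.map(f) // present: run the next effectful step
           .orElseGet(() -> CompletableFuture.completedFuture(Optional.empty()))); // absent: short-circuit
  }

  public static void main(String[] args) {
    CompletableFuture<Optional<Integer>> present = CompletableFuture.completedFuture(Optional.of(21));
    CompletableFuture<Optional<Integer>> doubled =
        flatMapOpt(present, n -> CompletableFuture.completedFuture(Optional.of(n * 2)));
    System.out.println(doubled.join()); // Optional[42]

    CompletableFuture<Optional<Integer>> absent = CompletableFuture.completedFuture(Optional.empty());
    System.out.println(flatMapOpt(absent, n -> CompletableFuture.completedFuture(Optional.of(n * 2))).join()); // Optional.empty
  }
}
```

OptionalTMonad performs this same dance generically, delegating the thenCompose role to whatever outerMonad.flatMap it was constructed with.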
OptionalT instances are typically created using its static factory methods. These often require a Monad<F> instance for the outer monad.
public void createExample() {
// --- Setup ---
// Outer Monad F = CompletableFutureKind.Witness
Monad<CompletableFutureKind.Witness> futureMonad = CompletableFutureMonad.INSTANCE;
String presentValue = "Data";
Integer numericValue = 123;
// 1. `OptionalT.fromKind(Kind<F, Optional<A>> value)`
// Wraps an existing F<Optional<A>>.
Kind<CompletableFutureKind.Witness, Optional<String>> fOptional =
FUTURE.widen(CompletableFuture.completedFuture(Optional.of(presentValue)));
OptionalT<CompletableFutureKind.Witness, String> ot1 = OptionalT.fromKind(fOptional);
// Value: CompletableFuture<Optional.of("Data")>
// 2. `OptionalT.some(Monad<F> monad, A a)`
// Creates an OptionalT with a present value, F<Optional.of(a)>.
OptionalT<CompletableFutureKind.Witness, String> ot2 = OptionalT.some(futureMonad, presentValue);
// Value: CompletableFuture<Optional.of("Data")>
// 3. `OptionalT.none(Monad<F> monad)`
// Creates an OptionalT representing an absent value, F<Optional.empty()>.
OptionalT<CompletableFutureKind.Witness, String> ot3 = OptionalT.none(futureMonad);
// Value: CompletableFuture<Optional.empty()>
// 4. `OptionalT.fromOptional(Monad<F> monad, Optional<A> optional)`
// Lifts a plain java.util.Optional into OptionalT, F<Optional<A>>.
Optional<Integer> optInt = Optional.of(numericValue);
OptionalT<CompletableFutureKind.Witness, Integer> ot4 = OptionalT.fromOptional(futureMonad, optInt);
// Value: CompletableFuture<Optional.of(123)>
Optional<Integer> optEmpty = Optional.empty();
OptionalT<CompletableFutureKind.Witness, Integer> ot4Empty = OptionalT.fromOptional(futureMonad, optEmpty);
// Value: CompletableFuture<Optional.empty()>
// 5. `OptionalT.liftF(Monad<F> monad, Kind<F, A> fa)`
// Lifts an F<A> into OptionalT. If A is null, it becomes F<Optional.empty()>, otherwise F<Optional.of(A)>.
Kind<CompletableFutureKind.Witness, String> fValue =
FUTURE.widen(CompletableFuture.completedFuture(presentValue));
OptionalT<CompletableFutureKind.Witness, String> ot5 = OptionalT.liftF(futureMonad, fValue);
// Value: CompletableFuture<Optional.of("Data")>
Kind<CompletableFutureKind.Witness, String> fNullValue =
FUTURE.widen(CompletableFuture.completedFuture(null)); // F<null>
OptionalT<CompletableFutureKind.Witness, String> ot5Null = OptionalT.liftF(futureMonad, fNullValue);
// Value: CompletableFuture<Optional.empty()> (because the value inside F was null)
// Accessing the wrapped value:
Kind<CompletableFutureKind.Witness, Optional<String>> wrappedFVO = ot1.value();
CompletableFuture<Optional<String>> futureOptional = FUTURE.narrow(wrappedFVO);
futureOptional.thenAccept(optStr -> System.out.println("ot1 result: " + optStr));
}
Consider a scenario where you need to fetch a user, then their profile, and finally their preferences. Each step is asynchronous (CompletableFuture) and might return an empty Optional if the data is not found. OptionalT helps manage this composition cleanly.
public static class OptionalTAsyncExample {
// --- Monad Setup ---
static final Monad<CompletableFutureKind.Witness> futureMonad = CompletableFutureMonad.INSTANCE;
static final OptionalTMonad<CompletableFutureKind.Witness> optionalTFutureMonad =
new OptionalTMonad<>(futureMonad);
static final ExecutorService executor = Executors.newFixedThreadPool(2);
public static Kind<CompletableFutureKind.Witness, Optional<User>> fetchUserAsync(String userId) {
return FUTURE.widen(CompletableFuture.supplyAsync(() -> {
System.out.println("Fetching user " + userId + " on " + Thread.currentThread().getName());
try {
TimeUnit.MILLISECONDS.sleep(50);
} catch (InterruptedException e) { /* ignore */ }
return "user1".equals(userId) ? Optional.of(new User(userId, "Alice")) : Optional.empty();
}, executor));
}
public static Kind<CompletableFutureKind.Witness, Optional<UserProfile>> fetchProfileAsync(String userId) {
return FUTURE.widen(CompletableFuture.supplyAsync(() -> {
System.out.println("Fetching profile for " + userId + " on " + Thread.currentThread().getName());
try {
TimeUnit.MILLISECONDS.sleep(50);
} catch (InterruptedException e) { /* ignore */ }
return "user1".equals(userId) ? Optional.of(new UserProfile(userId, "Loves HKJ")) : Optional.empty();
}, executor));
}
public static Kind<CompletableFutureKind.Witness, Optional<UserPreferences>> fetchPrefsAsync(String userId) {
return FUTURE.widen(CompletableFuture.supplyAsync(() -> {
System.out.println("Fetching preferences for " + userId + " on " + Thread.currentThread().getName());
try {
TimeUnit.MILLISECONDS.sleep(50);
} catch (InterruptedException e) { /* ignore */ }
// Simulate preferences sometimes missing even for a valid user
return "user1".equals(userId) && Math.random() > 0.3 ? Optional.of(new UserPreferences(userId, "dark")) : Optional.empty();
}, executor));
}
// --- Workflow using OptionalT ---
public static OptionalT<CompletableFutureKind.Witness, UserPreferences> getFullUserPreferences(String userId) {
// Start by fetching the user, lifting into OptionalT
OptionalT<CompletableFutureKind.Witness, User> userOT =
OptionalT.fromKind(fetchUserAsync(userId));
// If the user exists, fetch profile
OptionalT<CompletableFutureKind.Witness, UserProfile> profileOT =
OPTIONAL_T.narrow(
optionalTFutureMonad.flatMap(
user -> OPTIONAL_T.widen(OptionalT.fromKind(fetchProfileAsync(user.id()))),
OPTIONAL_T.widen(userOT)
)
);
// If profile exists, fetch preferences
OptionalT<CompletableFutureKind.Witness, UserPreferences> preferencesOT =
OPTIONAL_T.narrow(
optionalTFutureMonad.flatMap(
profile -> OPTIONAL_T.widen(OptionalT.fromKind(fetchPrefsAsync(profile.userId()))),
OPTIONAL_T.widen(profileOT)
)
);
return preferencesOT;
}
// Workflow with recovery / default
public static OptionalT<CompletableFutureKind.Witness, UserPreferences> getPrefsWithDefault(String userId) {
OptionalT<CompletableFutureKind.Witness, UserPreferences> prefsAttemptOT = getFullUserPreferences(userId);
Kind<OptionalTKind.Witness<CompletableFutureKind.Witness>, UserPreferences> recoveredPrefsOTKind =
optionalTFutureMonad.handleErrorWith(
OPTIONAL_T.widen(prefsAttemptOT),
(Unit v) -> { // This lambda is called if prefsAttemptOT results in F<Optional.empty()>
System.out.println("Preferences not found for " + userId + ", providing default.");
// Lift a default preference into OptionalT
UserPreferences defaultPrefs = new UserPreferences(userId, "default-light");
return OPTIONAL_T.widen(OptionalT.some(futureMonad, defaultPrefs));
}
);
return OPTIONAL_T.narrow(recoveredPrefsOTKind);
}
public static void main(String[] args) {
System.out.println("--- Attempting to get preferences for existing user (user1) ---");
OptionalT<CompletableFutureKind.Witness, UserPreferences> resultUser1OT = getFullUserPreferences("user1");
CompletableFuture<Optional<UserPreferences>> future1 =
FUTURE.narrow(resultUser1OT.value());
future1.whenComplete((optPrefs, ex) -> {
if (ex != null) {
System.err.println("Error for user1: " + ex.getMessage());
} else {
System.out.println("User1 Preferences: " + optPrefs.map(UserPreferences::toString).orElse("NOT FOUND"));
}
});
System.out.println("\n--- Attempting to get preferences for non-existing user (user2) ---");
OptionalT<CompletableFutureKind.Witness, UserPreferences> resultUser2OT = getFullUserPreferences("user2");
CompletableFuture<Optional<UserPreferences>> future2 =
FUTURE.narrow(resultUser2OT.value());
future2.whenComplete((optPrefs, ex) -> {
if (ex != null) {
System.err.println("Error for user2: " + ex.getMessage());
} else {
System.out.println("User2 Preferences: " + optPrefs.map(UserPreferences::toString).orElse("NOT FOUND (as expected)"));
}
});
System.out.println("\n--- Attempting to get preferences for user1 WITH DEFAULT ---");
OptionalT<CompletableFutureKind.Witness, UserPreferences> resultUser1WithDefaultOT = getPrefsWithDefault("user1");
CompletableFuture<Optional<UserPreferences>> future3 =
FUTURE.narrow(resultUser1WithDefaultOT.value());
future3.whenComplete((optPrefs, ex) -> {
if (ex != null) {
System.err.println("Error for user1 (with default): " + ex.getMessage());
} else {
// This will either be the fetched prefs or the default.
System.out.println("User1 Preferences (with default): " + optPrefs.map(UserPreferences::toString).orElse("THIS SHOULD NOT HAPPEN if default works"));
}
});
// Wait for async operations to complete for demonstration
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
executor.shutdown();
}
// --- Domain Model ---
record User(String id, String name) {
}
record UserProfile(String userId, String bio) {
}
record UserPreferences(String userId, String theme) {
}
}
This example demonstrates:
- Setting up OptionalTMonad with CompletableFutureMonad.
- Using OptionalT.fromKind to lift an existing Kind<F, Optional<A>> (the result of async service calls) into the OptionalT context.
- Sequencing operations with optionalTFutureMonad.flatMap. If any step in the chain (e.g., fetchUserAsync) results in F<Optional.empty()>, subsequent flatMap lambdas are short-circuited, and the overall result becomes F<Optional.empty()>.
- Using handleErrorWith to provide a default UserPreferences if the chain of operations results in an empty Optional.
- Finally, .value() is used to extract the underlying Kind<CompletableFutureKind.Witness, Optional<UserPreferences>> to interact with the CompletableFuture directly.
OptionalT simplifies managing sequences of operations where each step might not yield a value.
Further Reading
Start with the Java-focused resources to understand Optional patterns, then explore General FP concepts for deeper understanding, and finally check Related Libraries to see alternative approaches.
Java-Focused Resources
Beginner Level:
- 📚 Java Optional Best Practices - Comprehensive Baeldung guide (20 min read)
- 📄 The Mother of All Bikesheds: Optional.orElse vs orElseGet - Tomasz Nurkiewicz's practical guide (10 min read)
- 🎥 Java Optional - A Practical Guide - Stuart Marks (Oracle) on proper Optional usage (60 min watch)
Intermediate Level:
- 📄 Chaining Optional in Java - flatMap patterns and composition (15 min read)
- 📄 Optional Anti-Patterns - What NOT to do (12 min read)
General FP Concepts
- 📖 Maybe Monad Explained - Haskell's Maybe (Java's Optional equivalent)
- 📖 Null References: The Billion Dollar Mistake - Tony Hoare's historic talk on why Optional matters (10 min read)
Related Libraries & Comparisons
- 🔗 Vavr Option - More functional than Java's Optional
- 🔗 Guava's Optional - Pre-Java 8 approach, still relevant
- 🔗 Kotlin Null Safety - Language-level solution to the same problem
Community & Discussion
- 💬 When to Return Optional vs Throw Exception - Stack Overflow debate
- 💬 Optional Performance Considerations - Aleksey Shipilëv's JVM deep dive
The MaybeT Transformer:
Combining Monadic Effects with Optionality
- How to combine Maybe's optionality with other monadic effects
- Building workflows where operations might produce Nothing within async contexts
- Understanding the difference between MaybeT and OptionalT
- Using just, nothing, and fromMaybe to construct MaybeT values
- Handling Nothing states with Unit as the error type in MonadError
MaybeT<F, A>: Combining Any Monad F with Maybe<A>
The MaybeT monad transformer allows you to combine the optionality of Maybe<A> (representing a value that might be
Just<A> or Nothing) with another outer monad F. It transforms a computation that results in Kind<F, Maybe<A>>
into a single monadic structure. This is useful for operations within an effectful context F (like
CompletableFutureKind for async operations or ListKind for non-deterministic computations) that can also result in
an absence of a value.
- F: The witness type of the outer monad (e.g., CompletableFutureKind.Witness, ListKind.Witness). This monad handles the primary effect (e.g., asynchronicity, non-determinism).
- A: The type of the value potentially held by the inner Maybe.
// From: org.higherkindedj.hkt.maybe_t.MaybeT
public record MaybeT<F, A>(@NonNull Kind<F, Maybe<A>> value) {
/* ... static factories ... */ }
MaybeT<F, A> wraps a value of type Kind<F, Maybe<A>>. It signifies a computation in the context of F that will
eventually produce a Maybe<A>. The main benefit comes from its associated type class instance, MaybeTMonad, which
provides monadic operations for this combined structure.
MaybeTKind<F, A>: The Witness Type
Similar to other HKTs in Higher-Kinded-J, MaybeT uses MaybeTKind<F, A> as its witness type for use in generic
functions.
- It extends Kind<G, A>, where G (the witness for the combined monad) is MaybeTKind.Witness<F>.
- F is fixed for a specific MaybeT context, while A is the variable type parameter.
public interface MaybeTKind<F, A> extends Kind<MaybeTKind.Witness<F>, A> {
// Witness type G = MaybeTKind.Witness<F>
// Value type A = A (from Maybe<A>)
}
MaybeTKindHelper
- This utility class provides static widen and narrow methods for safe conversion between the concrete MaybeT<F, A> and its Kind representation (Kind<MaybeTKind.Witness<F>, A>).
// To wrap:
// MaybeT<F, A> maybeT = ...;
Kind<MaybeTKind.Witness<F>, A> kind = MAYBE_T.widen(maybeT);
// To unwrap:
MaybeT<F, A> unwrappedMaybeT = MAYBE_T.narrow(kind);
MaybeTMonad<F>: Operating on MaybeT
The MaybeTMonad<F> class implements MonadError<MaybeTKind.Witness<F>, Unit>. The error type E for MonadError is fixed to Unit, signifying that an "error" in this context is the Maybe.nothing() state within the F<Maybe<A>> structure.
MaybeT represents failure (or absence) as Nothing, which doesn't carry an error value itself.
- It requires a Monad<F> instance for the outer monad F, provided during construction. This instance is used to manage the effects of F.
- It uses MAYBE_T.widen and MAYBE_T.narrow for conversions between MaybeT and its Kind representation.
- Operations like raiseError(Unit.INSTANCE) will create a MaybeT representing F<Nothing>. The Unit.INSTANCE signifies the Nothing state without carrying a separate error value.
- handleErrorWith allows "recovering" from a Nothing state by providing an alternative MaybeT. The handler function passed to handleErrorWith will receive Unit.INSTANCE if a Nothing state is encountered.
// Example: F = CompletableFutureKind.Witness, Error type for MonadError is Unit
// 1. Get the Monad instance for the outer monad F
Monad<CompletableFutureKind.Witness> futureMonad = CompletableFutureMonad.INSTANCE;
// 2. Create the MaybeTMonad, providing the outer monad instance
MonadError<MaybeTKind.Witness<CompletableFutureKind.Witness>, Unit> maybeTMonad =
new MaybeTMonad<>(futureMonad);
// Now 'maybeTMonad' can be used to operate on Kind<MaybeTKind.Witness<CompletableFutureKind.Witness>, A> values.
- maybeTMonad.of(value): Lifts a nullable value A into the MaybeT context. Result: F<Maybe.fromNullable(value)>.
- maybeTMonad.map(f, maybeTKind): Applies the function A -> B to the Just value inside the nested structure. If it's Nothing, or f returns null, it propagates F<Nothing>.
- maybeTMonad.flatMap(f, maybeTKind): Sequences operations. Takes A -> Kind<MaybeTKind.Witness<F>, B>. If the input is F<Just(a)>, it applies f(a) to get the next MaybeT<F, B> and extracts its Kind<F, Maybe<B>>. If F<Nothing>, it propagates F<Nothing>.
- maybeTMonad.raiseError(Unit.INSTANCE): Creates a MaybeT representing F<Nothing>.
- maybeTMonad.handleErrorWith(maybeTKind, handler): Handles a Nothing state. The handler Unit -> Kind<MaybeTKind.Witness<F>, A> is invoked with Unit.INSTANCE.
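The same sequencing rule works for any outer monad. As a contrast to the CompletableFuture examples, here is a hand-rolled sketch with F = List (non-determinism), using the JDK's Optional as a stand-in for the library's Maybe: a Nothing for one element propagates without affecting the others.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import java.util.stream.Stream;

// The MaybeT sequencing rule hand-rolled for F = List (non-determinism).
// The JDK's Optional stands in for the library's Maybe; this is a sketch, not the library API.
public class MaybeTListSketch {
  static <A, B> List<Optional<B>> flatMapMaybe(
      List<Optional<A>> fa,
      Function<A, List<Optional<B>>> f) {
    return fa.stream()
        .flatMap(opt -> opt
            .map(a -> f.apply(a).stream())                    // Just: run the next step for this element
            .orElseGet(() -> Stream.of(Optional.<B>empty()))) // Nothing: propagate absence
        .toList();
  }

  public static void main(String[] args) {
    List<Optional<Integer>> input = List.of(Optional.of(1), Optional.empty(), Optional.of(3));
    List<Optional<Integer>> result = flatMapMaybe(input, n -> List.of(Optional.of(n * 10)));
    System.out.println(result); // [Optional[10], Optional.empty, Optional[30]]
  }
}
```

MaybeTMonad captures this rule once; swapping the Monad<F> it is constructed with swaps the outer effect without touching the sequencing logic.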
MaybeT instances are typically created using its static factory methods, often requiring the outer Monad<F>
instance:
public void createExample() {
Monad<OptionalKind.Witness> optMonad = OptionalMonad.INSTANCE; // Outer Monad F=Optional
String presentValue = "Hello";
// 1. Lifting a non-null value: Optional<Just(value)>
MaybeT<OptionalKind.Witness, String> mtJust = MaybeT.just(optMonad, presentValue);
// Resulting wrapped value: Optional.of(Maybe.just("Hello"))
// 2. Creating a 'Nothing' state: Optional<Nothing>
MaybeT<OptionalKind.Witness, String> mtNothing = MaybeT.nothing(optMonad);
// Resulting wrapped value: Optional.of(Maybe.nothing())
// 3. Lifting a plain Maybe: Optional<Maybe(input)>
Maybe<Integer> plainMaybe = Maybe.just(123);
MaybeT<OptionalKind.Witness, Integer> mtFromMaybe = MaybeT.fromMaybe(optMonad, plainMaybe);
// Resulting wrapped value: Optional.of(Maybe.just(123))
Maybe<Integer> plainNothing = Maybe.nothing();
MaybeT<OptionalKind.Witness, Integer> mtFromMaybeNothing = MaybeT.fromMaybe(optMonad, plainNothing);
// Resulting wrapped value: Optional.of(Maybe.nothing())
// 4. Lifting an outer monad value F<A>: Optional<Maybe<A>> (using fromNullable)
Kind<OptionalKind.Witness, String> outerOptional = OPTIONAL.widen(Optional.of("World"));
MaybeT<OptionalKind.Witness, String> mtLiftF = MaybeT.liftF(optMonad, outerOptional);
// Resulting wrapped value: Optional.of(Maybe.just("World"))
Kind<OptionalKind.Witness, String> outerEmptyOptional = OPTIONAL.widen(Optional.empty());
MaybeT<OptionalKind.Witness, String> mtLiftFEmpty = MaybeT.liftF(optMonad, outerEmptyOptional);
// Resulting wrapped value: Optional.of(Maybe.nothing())
// 5. Wrapping an existing nested Kind: F<Maybe<A>>
Kind<OptionalKind.Witness, Maybe<String>> nestedKind =
OPTIONAL.widen(Optional.of(Maybe.just("Present")));
MaybeT<OptionalKind.Witness, String> mtFromKind = MaybeT.fromKind(nestedKind);
// Resulting wrapped value: Optional.of(Maybe.just("Present"))
// Accessing the wrapped value:
Kind<OptionalKind.Witness, Maybe<String>> wrappedValue = mtJust.value();
Optional<Maybe<String>> unwrappedOptional = OPTIONAL.narrow(wrappedValue);
// unwrappedOptional is Optional.of(Maybe.just("Hello"))
}
Let's consider fetching a user and then their preferences, where each step is asynchronous and might not return a value.
public static class MaybeTAsyncExample {
// --- Setup ---
Monad<CompletableFutureKind.Witness> futureMonad = CompletableFutureMonad.INSTANCE;
MonadError<MaybeTKind.Witness<CompletableFutureKind.Witness>, Unit> maybeTMonad =
new MaybeTMonad<>(futureMonad);
// Simulates fetching a user asynchronously
Kind<CompletableFutureKind.Witness, Maybe<User>> fetchUserAsync(String userId) {
System.out.println("Fetching user: " + userId);
CompletableFuture<Maybe<User>> future = CompletableFuture.supplyAsync(() -> {
try {
TimeUnit.MILLISECONDS.sleep(50);
} catch (InterruptedException e) { /* ignore */ }
if ("user123".equals(userId)) {
return Maybe.just(new User(userId, "Alice"));
}
return Maybe.nothing();
});
return FUTURE.widen(future);
}
// Simulates fetching user preferences asynchronously
Kind<CompletableFutureKind.Witness, Maybe<UserPreferences>> fetchPreferencesAsync(String userId) {
System.out.println("Fetching preferences for user: " + userId);
CompletableFuture<Maybe<UserPreferences>> future = CompletableFuture.supplyAsync(() -> {
try {
TimeUnit.MILLISECONDS.sleep(30);
} catch (InterruptedException e) { /* ignore */ }
if ("user123".equals(userId)) {
return Maybe.just(new UserPreferences(userId, "dark-mode"));
}
return Maybe.nothing(); // No preferences for other users or if the user fetch failed
});
return FUTURE.widen(future);
}
// Function to run the workflow for a given userId
Kind<CompletableFutureKind.Witness, Maybe<UserPreferences>> getUserPreferencesWorkflow(String userIdToFetch) {
// Step 1: Fetch User
// Directly use MaybeT.fromKind as fetchUserAsync already returns F<Maybe<User>>
Kind<MaybeTKind.Witness<CompletableFutureKind.Witness>, User> userMT =
MAYBE_T.widen(MaybeT.fromKind(fetchUserAsync(userIdToFetch)));
// Step 2: Fetch Preferences if User was found
Kind<MaybeTKind.Witness<CompletableFutureKind.Witness>, UserPreferences> preferencesMT =
maybeTMonad.flatMap(
user -> { // This lambda is only called if userMT contains F<Just(user)>
System.out.println("User found: " + user.name() + ". Now fetching preferences.");
// fetchPreferencesAsync returns Kind<CompletableFutureKind.Witness, Maybe<UserPreferences>>
// which is F<Maybe<A>>, so we can wrap it directly.
return MAYBE_T.widen(MaybeT.fromKind(fetchPreferencesAsync(user.id())));
},
userMT // Input to flatMap
);
// Try to recover if preferences are Nothing, but the user was found (conceptual)
Kind<MaybeTKind.Witness<CompletableFutureKind.Witness>, UserPreferences> preferencesWithDefaultMT =
maybeTMonad.handleErrorWith(preferencesMT, (Unit v) -> { // Handler for Nothing
System.out.println("Preferences not found, attempting to use default.");
// We need userId here. For simplicity, let's assume we could get it or just return nothing.
// This example shows returning nothing again if we can't provide a default.
// A real scenario might try to fetch default preferences or construct one.
return maybeTMonad.raiseError(Unit.INSTANCE); // Still Nothing, or could be MaybeT.just(defaultPrefs)
});
// Unwrap the final MaybeT to get the underlying Future<Maybe<UserPreferences>>
MaybeT<CompletableFutureKind.Witness, UserPreferences> finalMaybeT =
MAYBE_T.narrow(preferencesWithDefaultMT); // or preferencesMT if no recovery
return finalMaybeT.value();
}
public void asyncExample() {
System.out.println("--- Fetching preferences for known user (user123) ---");
Kind<CompletableFutureKind.Witness, Maybe<UserPreferences>> resultKnownUserKind =
getUserPreferencesWorkflow("user123");
Maybe<UserPreferences> resultKnownUser = FUTURE.join(resultKnownUserKind);
System.out.println("Known User Result: " + resultKnownUser);
// Expected: Just(UserPreferences[userId=user123, theme=dark-mode])
System.out.println("\n--- Fetching preferences for unknown user (user999) ---");
Kind<CompletableFutureKind.Witness, Maybe<UserPreferences>> resultUnknownUserKind =
getUserPreferencesWorkflow("user999");
Maybe<UserPreferences> resultUnknownUser = FUTURE.join(resultUnknownUserKind);
System.out.println("Unknown User Result: " + resultUnknownUser);
// Expected: Nothing
}
// --- Workflow Definition using MaybeT ---
// --- Domain Model ---
record User(String id, String name) {
}
record UserPreferences(String userId, String theme) {
}
}
This example illustrates:
- Setting up `MaybeTMonad` with `CompletableFutureMonad` and `Unit` as the error type.
- Using `MaybeT.fromKind` to lift an existing `Kind<F, Maybe<A>>` into the `MaybeT` context.
- Sequencing operations with `maybeTMonad.flatMap`. If `fetchUserAsync` results in `F<Nothing>`, the lambda for fetching preferences is skipped.
- The `handleErrorWith` shows a way to potentially recover from a `Nothing` state, using `Unit` in the handler and `raiseError(Unit.INSTANCE)`.
- Finally, `.value()` is used to extract the underlying `Kind<CompletableFutureKind.Witness, Maybe<UserPreferences>>`.
- The `MaybeT` transformer simplifies working with nested optional values within other monadic contexts by providing a unified monadic interface, abstracting away the manual checks and propagation of `Nothing` states.
- When `MaybeTMonad` is used as a `MonadError`, the error type is `Unit`, indicating that the "error" (a `Nothing` state) doesn't carry a specific value beyond its occurrence.
MaybeT vs OptionalT: When to Use Which?
Both MaybeT and OptionalT serve similar purposes—combining optionality with other monadic effects. Here's when to choose each:
Use MaybeT when:
- You're working within the higher-kinded-j ecosystem and want consistency with the `Maybe` type
- You need a type that's explicitly designed for functional composition (more FP-native)
- You want to avoid Java's `Optional` and its quirks (e.g., serialisation warnings, identity-sensitive operations)
- You're building a system where `Maybe` is used throughout
Use OptionalT when:
- You're integrating with existing Java code that uses `java.util.Optional`
- You want to leverage familiar Java 8+ `Optional` APIs
- Your team is more comfortable with standard Java types
- You're wrapping external libraries that return `Optional`
In practice: The choice often comes down to consistency with your existing codebase. Both offer equivalent functionality through their MonadError instances.
Further Reading
Start with the Java-focused resources to understand Maybe/Option patterns, then explore General FP concepts for deeper understanding, and finally check Related Libraries to see alternative approaches.
Java-Focused Resources
Beginner Level:
- 📚 Maybe vs Optional: Understanding the Difference - When to use custom Maybe over Java's Optional (10 min read)
- 📄 Null Handling Patterns in Modern Java - Comprehensive guide to null safety (15 min read)
Intermediate Level:
- 📄 MonadZero and Failure - Understanding failure representation (20 min read)
- 📄 Handling Nothing in Asynchronous Code - DZone's practical patterns (12 min read)
General FP Concepts
- 📖 Maybe/Option Type - Wikipedia's cross-language overview
- 📖 A Fistful of Monads (Haskell) - Accessible introduction to Maybe (30 min read)
Related Libraries & Comparisons
- 🔗 Vavr Option vs Java Optional - Feature comparison
- 🔗 Scala Option - Scala's battle-tested implementation
- 🔗 Arrow Option (Kotlin) - Kotlin FP approach
Community & Discussion
- 💬 Maybe vs Either for Error Handling - Stack Overflow comparison
- 💬 Why Use Maybe When We Have Optional? - Reddit discussion on use cases
The ReaderT Transformer:
Combining Monadic Effects with a Read-Only Environment
- How to combine dependency injection (Reader) with other effects like async operations
- Building configuration-dependent workflows that are also async or failable
- Using `ask`, `reader`, and `lift` to work with environment-dependent computations
- Creating testable microservice clients with injected configuration
- Managing database connections, API keys, and other contextual dependencies
ReaderT Monad Transformer
The ReaderT monad transformer (short for Reader Transformer) allows you to combine the capabilities of the Reader monad (providing a read-only environment R) with another outer monad F. It encapsulates a computation that, given an environment R, produces a result within the monadic context F (i.e., Kind<F, A>).
This is particularly useful when you have operations that require some configuration or context (R) and also involve other effects managed by F, such as asynchronicity (CompletableFutureKind), optionality (OptionalKind, MaybeKind), or error handling (EitherKind).
The ReaderT<F, R, A> structure essentially wraps a function R -> Kind<F, A>.
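Stripped of the HKT machinery, that shape is easy to see in plain Java. In this hedged sketch the outer monad `F` is fixed to `Optional`, so the transformer is literally a record wrapping `R -> Optional<A>` (`Config`, `ReaderOpt`, and `READ_KEY` are invented names, not library API):

```java
import java.util.Optional;
import java.util.function.Function;

// With F fixed to Optional, a ReaderT is just a wrapped function R -> Optional<A>.
public class ReaderTShape {
    record Config(String apiKey) {}

    record ReaderOpt<R, A>(Function<R, Optional<A>> run) {}

    // "run" is R -> F<A>: give it a Config, get back an Optional result.
    static final ReaderOpt<Config, String> READ_KEY =
        new ReaderOpt<>(cfg -> Optional.of("key=" + cfg.apiKey()));
}
```

Nothing happens until `run` is applied to an environment, which is what makes the dependency injectable at the edge of the program.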
Structure
ReaderT<F, R, A>: The Core Data Type
ReaderT<F, R, A> is a record that encapsulates the core computation.
public record ReaderT<F, R, A>(@NonNull Function<R, Kind<F, A>> run)
implements ReaderTKind<F, R, A> {
// ... static factory methods ...
}
- `F`: The witness type of the outer monad (e.g., `OptionalKind.Witness`, `CompletableFutureKind.Witness`). This monad handles an effect such as optionality or asynchronicity.
- `R`: The type of the read-only environment (context or configuration) that the computation depends on.
- `A`: The type of the value produced by the computation, wrapped within the outer monad `F`.
- `run`: The essential function `R -> Kind<F, A>`. When this function is applied to an environment of type `R`, it yields a monadic value `Kind<F, A>`.
ReaderTKind<F, R, A>: The Witness Type
To integrate with Higher-Kinded-J's generic programming capabilities, ReaderTKind<F, R, A> serves as the witness type.
- It extends `Kind<G, A>`, where `G` (the witness for the combined `ReaderT` monad) is `ReaderTKind.Witness<F, R>`.
- The types `F` (outer monad) and `R` (environment) are fixed for a specific `ReaderT` context, while `A` is the variable value type.
public interface ReaderTKind<F, R, A> extends Kind<ReaderTKind.Witness<F, R>, A> {
// Witness type G = ReaderTKind.Witness<F, R>
// Value type A = A
}
ReaderTKindHelper: Utility for Wrapping and Unwrapping
ReaderTKindHelper provides the `READER_T` enum, whose utility methods convert between the concrete `ReaderT<F, R, A>` type and its `Kind` representation (`Kind<ReaderTKind.Witness<F, R>, A>`).
public enum ReaderTKindHelper {
READER_T;
// Unwraps Kind<ReaderTKind.Witness<F, R>, A> to ReaderT<F, R, A>
public <F, R, A> @NonNull ReaderT<F, R, A> narrow(
@Nullable Kind<ReaderTKind.Witness<F, R>, A> kind);
// Wraps ReaderT<F, R, A> into ReaderTKind<F, R, A>
public <F, R, A> @NonNull ReaderTKind<F, R, A> widen(
@NonNull ReaderT<F, R, A> readerT);
}
ReaderTMonad<F, R>: Operating on ReaderT
The ReaderTMonad<F, R> class implements the Monad<ReaderTKind.Witness<F, R>> interface, providing the standard monadic operations (of, map, flatMap, ap) for the ReaderT structure.
- It requires a `Monad<F>` instance for the outer monad `F` to be provided during its construction. This `outerMonad` is used internally to sequence operations within the `F` context.
- `R` is the fixed environment type for this monad instance.
// Example: F = OptionalKind.Witness, R = AppConfig
// 1. Get the Monad instance for the outer monad F
OptionalMonad optionalMonad = OptionalMonad.INSTANCE;
// 2. Define your environment type
record AppConfig(String apiKey) {}
// 3. Create the ReaderTMonad
ReaderTMonad<OptionalKind.Witness, AppConfig> readerTOptionalMonad =
new ReaderTMonad<>(optionalMonad);
// Now 'readerTOptionalMonad' can be used to operate on
// Kind<ReaderTKind.Witness<OptionalKind.Witness, AppConfig>, A> values.
- `readerTMonad.of(value)`: Lifts a pure value `A` into the `ReaderT` context. The underlying function becomes `r -> outerMonad.of(value)`. Result: `ReaderT(r -> F<A>)`.
- `readerTMonad.map(func, readerTKind)`: Applies a function `A -> B` to the value `A` inside the `ReaderT` structure, if present and successful within the `F` context. The transformation `A -> B` happens within the `outerMonad.map` call. Result: `ReaderT(r -> F<B>)`.
- `readerTMonad.flatMap(func, readerTKind)`: The core sequencing operation. Takes a function `A -> Kind<ReaderTKind.Witness<F, R>, B>` (which is effectively `A -> ReaderT<F, R, B>`). It runs the initial `ReaderT` with the environment `R` to get `Kind<F, A>`, then uses `outerMonad.flatMap` to process this. If `Kind<F, A>` yields an `a`, `func` is applied to `a` to get a new `ReaderT<F, R, B>`. This new `ReaderT` is then also run with the same original environment `R` to yield `Kind<F, B>`. This allows composing computations that all depend on the same environment `R` while also managing the effects of `F`. Result: `ReaderT(r -> F<B>)`.
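The semantics above can be demonstrated without the library. This is a hedged, JDK-only sketch with `F` fixed to `Optional`; `ReaderOpt`, `AppConfig`, and `WORKFLOW` are invented names. Note how `flatMap` hands the same environment `r` to both steps:

```java
import java.util.Optional;
import java.util.function.Function;

// Hand-rolled of/map/flatMap for a ReaderT with F = Optional.
public class ReaderOptMonad {

    record ReaderOpt<R, A>(Function<R, Optional<A>> run) {
        static <R, A> ReaderOpt<R, A> of(A a) {           // r -> outerMonad.of(a)
            return new ReaderOpt<>(r -> Optional.of(a));
        }
        <B> ReaderOpt<R, B> map(Function<A, B> f) {       // transform inside F
            return new ReaderOpt<>(r -> run.apply(r).map(f));
        }
        <B> ReaderOpt<R, B> flatMap(Function<A, ReaderOpt<R, B>> f) {
            // Run this reader, then run the next step with the SAME environment r.
            return new ReaderOpt<>(r -> run.apply(r).flatMap(a -> f.apply(a).run().apply(r)));
        }
    }

    record AppConfig(String apiKey) {}

    // Both steps read the same AppConfig; flatMap threads it implicitly.
    static final ReaderOpt<AppConfig, String> WORKFLOW =
        new ReaderOpt<AppConfig, String>(cfg -> Optional.of(cfg.apiKey()))
            .flatMap(key -> ReaderOpt.of("got " + key));
}
```

The library's `ReaderTMonad` plays the same role generically, for any outer monad `F` rather than a hard-coded `Optional`.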
You typically create ReaderT instances using its static factory methods. These methods often require an instance of Monad<F> for the outer monad.
public void createExample(){
// --- Setup ---
// Outer Monad F = OptionalKind.Witness
OptionalMonad optMonad = OptionalMonad.INSTANCE;
// Environment Type R
record Config(String setting) {
}
Config testConfig = new Config("TestValue");
// --- Factory Methods ---
// 1. `ReaderT.of(Function<R, Kind<F, A>> runFunction)`
// Constructs directly from the R -> F<A> function.
Function<Config, Kind<OptionalKind.Witness, String>> runFn1 =
cfg -> OPTIONAL.widen(Optional.of("Data based on " + cfg.setting()));
ReaderT<OptionalKind.Witness, Config, String> rt1 = ReaderT.of(runFn1);
// To run: OPTIONAL.narrow(rt1.run().apply(testConfig)) is Optional.of("Data based on TestValue")
System.out.println(OPTIONAL.narrow(rt1.run().apply(testConfig)));
// 2. `ReaderT.lift(Monad<F> outerMonad, Kind<F, A> fa)`
// Lifts an existing monadic value `Kind<F, A>` into ReaderT.
// The resulting ReaderT ignores the environment R and always returns `fa`.
Kind<OptionalKind.Witness, Integer> optionalValue = OPTIONAL.widen(Optional.of(123));
ReaderT<OptionalKind.Witness, Config, Integer> rt2 = ReaderT.lift(optMonad, optionalValue);
// To run: OPTIONAL.narrow(rt2.run().apply(testConfig)) is Optional.of(123)
System.out.println(OPTIONAL.narrow(rt2.run().apply(testConfig)));
Kind<OptionalKind.Witness, Integer> emptyOptional = OPTIONAL.widen(Optional.empty());
ReaderT<OptionalKind.Witness, Config, Integer> rt2Empty = ReaderT.lift(optMonad, emptyOptional);
// To run: OPTIONAL.narrow(rt2Empty.run().apply(testConfig)) is Optional.empty()
// 3. `ReaderT.reader(Monad<F> outerMonad, Function<R, A> f)`
// Creates a ReaderT from a function R -> A. The result A is then lifted into F using outerMonad.of(A).
Function<Config, String> simpleReaderFn = cfg -> "Hello from " + cfg.setting();
ReaderT<OptionalKind.Witness, Config, String> rt3 = ReaderT.reader(optMonad, simpleReaderFn);
// To run: OPTIONAL.narrow(rt3.run().apply(testConfig)) is Optional.of("Hello from TestValue")
System.out.println(OPTIONAL.narrow(rt3.run().apply(testConfig)));
// 4. `ReaderT.ask(Monad<F> outerMonad)`
// Creates a ReaderT that, when run, provides the environment R itself as the result, lifted into F.
// The function is r -> outerMonad.of(r).
ReaderT<OptionalKind.Witness, Config, Config> rt4 = ReaderT.ask(optMonad);
// To run: OPTIONAL.narrow(rt4.run().apply(testConfig)) is Optional.of(new Config("TestValue"))
System.out.println(OPTIONAL.narrow(rt4.run().apply(testConfig)));
// --- Using ReaderTKindHelper.READER_T to widen/narrow for Monad operations ---
// Using 'var' avoids spelling out the full type and the cast:
// ReaderTKind<OptionalKind.Witness, Config, String> kindRt1 =
//     (ReaderTKind<OptionalKind.Witness, Config, String>) READER_T.widen(rt1);
var kindRt1 = READER_T.widen(rt1);
ReaderT<OptionalKind.Witness, Config, String> unwrappedRt1 = READER_T.narrow(kindRt1);
}
Sometimes, a computation dependent on an environment R and involving an outer monad F might perform an action (e.g., logging, initializing a resource, sending a fire-and-forget message) without producing a specific data value. In such cases, the result type A of ReaderT<F, R, A> can be org.higherkindedj.hkt.Unit.
Let's extend the asynchronous example to include an action that logs a message using the AppConfig and completes asynchronously, returning Unit.
// Action: Log a message using AppConfig, complete asynchronously returning F<Unit>
public static Kind<CompletableFutureKind.Witness, Unit> logInitialisationAsync(AppConfig config) {
CompletableFuture<Unit> future = CompletableFuture.runAsync(() -> {
System.out.println("Thread: " + Thread.currentThread().getName() +
" - Initialising component with API Key: " + config.apiKey() +
" for Service URL: " + config.serviceUrl());
// Simulate some work
try {
TimeUnit.MILLISECONDS.sleep(50);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new RuntimeException(e);
}
System.out.println("Thread: " + Thread.currentThread().getName() +
" - Initialisation complete for: " + config.serviceUrl());
}, config.executor()).thenApply(v -> Unit.INSTANCE); // Ensure CompletableFuture<Unit>
return FUTURE.widen(future);
}
// Wrap the action in ReaderT: R -> F<Unit>
public static ReaderT<CompletableFutureKind.Witness, AppConfig, Unit> initialiseComponentRT() {
return ReaderT.of(ReaderTAsyncUnitExample::logInitialisationAsync);
}
public static void main(String[] args) {
ExecutorService executor = Executors.newFixedThreadPool(2);
AppConfig prodConfig = new AppConfig("prod_secret_for_init", "https://init.prod.service", executor);
// Get the ReaderT for the initialisation action
ReaderT<CompletableFutureKind.Witness, AppConfig, Unit> initAction = initialiseComponentRT();
System.out.println("--- Running Initialisation Action with Prod Config ---");
// Run the action by providing the prodConfig environment
// This returns Kind<CompletableFutureKind.Witness, Unit>
Kind<CompletableFutureKind.Witness, Unit> futureUnit = initAction.run().apply(prodConfig);
// Wait for completion and get the Unit result (which is just Unit.INSTANCE)
Unit result = FUTURE.join(futureUnit);
System.out.println("Initialisation Result: " + result); // Expected: Initialisation Result: ()
executor.shutdown();
try {
if (!executor.awaitTermination(5, TimeUnit.SECONDS)) {
executor.shutdownNow();
}
} catch (InterruptedException e) {
executor.shutdownNow();
Thread.currentThread().interrupt();
}
}
This example illustrates:
- An asynchronous action (`logInitialisationAsync`) that depends on `AppConfig` but logically returns no specific data, so its result is `CompletableFuture<Unit>`.
- This action is wrapped into a `ReaderT<CompletableFutureKind.Witness, AppConfig, Unit>`.
- When this `ReaderT` is run with an `AppConfig`, it yields a `Kind<CompletableFutureKind.Witness, Unit>`.
- The final result of joining such a future is `Unit.INSTANCE`, signifying successful completion of the effectful, environment-dependent action.
Let's illustrate ReaderT by combining an environment dependency (AppConfig) with an asynchronous operation (CompletableFuture).
public class ReaderTAsyncExample {
// --- Monad Setup ---
// Outer Monad F = CompletableFutureKind.Witness
static final Monad<CompletableFutureKind.Witness> futureMonad = CompletableFutureMonad.INSTANCE;
// ReaderTMonad for AppConfig and CompletableFutureKind
static final ReaderTMonad<CompletableFutureKind.Witness, AppConfig> cfReaderTMonad =
new ReaderTMonad<>(futureMonad);
// Simulates an async call to an external service
public static Kind<CompletableFutureKind.Witness, ServiceData> fetchExternalData(AppConfig config, String itemId) {
System.out.println("Thread: " + Thread.currentThread().getName() + " - Fetching external data for " + itemId + " using API key: " + config.apiKey() + " from " + config.serviceUrl());
CompletableFuture<ServiceData> future = CompletableFuture.supplyAsync(() -> {
try {
TimeUnit.MILLISECONDS.sleep(100); // Simulate network latency
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new RuntimeException(e);
}
return new ServiceData("Raw data for " + itemId + " from " + config.serviceUrl());
}, config.executor());
return FUTURE.widen(future);
}
// Operation 1: Fetch data, wrapped in ReaderT
// This is R -> F<A> which is the core of ReaderT
public static ReaderT<CompletableFutureKind.Witness, AppConfig, ServiceData> fetchServiceDataRT(String itemId) {
return ReaderT.of(appConfig -> fetchExternalData(appConfig, itemId));
}
// Operation 2: Process data (sync part, depends on AppConfig, then lifts to ReaderT)
// This uses ReaderT.reader: R -> A, then A is lifted to F<A>
public static ReaderT<CompletableFutureKind.Witness, AppConfig, ProcessedData> processDataRT(ServiceData sData) {
return ReaderT.reader(futureMonad, // Outer monad to lift the result
appConfig -> { // Function R -> A (Config -> ProcessedData)
System.out.println("Thread: " + Thread.currentThread().getName() + " - Processing data with config: " + appConfig.apiKey());
return new ProcessedData("Processed: " + sData.rawData().toUpperCase() + " (API Key Suffix: " + appConfig.apiKey().substring(Math.max(0, appConfig.apiKey().length() - 3)) + ")");
});
}
// --- Service Logic (depends on AppConfig, returns Future<ServiceData>) ---
public static void main(String[] args) throws Exception {
ExecutorService executor = Executors.newFixedThreadPool(2);
AppConfig prodConfig = new AppConfig("prod_secret_key_xyz", "https://api.prod.example.com", executor);
AppConfig stagingConfig = new AppConfig("stag_test_key_123", "https://api.staging.example.com", executor);
// --- Composing with ReaderTMonad.flatMap ---
// Define a workflow: fetch data, then process it.
// The AppConfig is threaded through automatically by ReaderT.
Kind<ReaderTKind.Witness<CompletableFutureKind.Witness, AppConfig>, ProcessedData> workflowRTKind =
cfReaderTMonad.flatMap(
serviceData -> READER_T.widen(processDataRT(serviceData)), // ServiceData -> ReaderTKind<..., ProcessedData>
READER_T.widen(fetchServiceDataRT("item123")) // Initial ReaderTKind<..., ServiceData>
);
// Unwrap to the concrete ReaderT to run it
ReaderT<CompletableFutureKind.Witness, AppConfig, ProcessedData> composedWorkflow =
READER_T.narrow(workflowRTKind);
// --- Running the workflow with different configurations ---
System.out.println("--- Running with Production Config ---");
// Run the workflow by providing the 'prodConfig' environment
// This returns Kind<CompletableFutureKind.Witness, ProcessedData>
Kind<CompletableFutureKind.Witness, ProcessedData> futureResultProd = composedWorkflow.run().apply(prodConfig);
ProcessedData resultProd = FUTURE.join(futureResultProd); // Blocks for result
System.out.println("Prod Result: " + resultProd);
// Expected output will show "prod_secret_key_xyz" and "https://api.prod.example.com" in logs
// and "Processed: RAW DATA FOR ITEM123 FROM https://api.prod.example.com (API Key Suffix: xyz)"
System.out.println("\n--- Running with Staging Config ---");
// Run the same workflow with 'stagingConfig'
Kind<CompletableFutureKind.Witness, ProcessedData> futureResultStaging = composedWorkflow.run().apply(stagingConfig);
ProcessedData resultStaging = FUTURE.join(futureResultStaging); // Blocks for result
System.out.println("Staging Result: " + resultStaging);
// Expected output will show "stag_test_key_123" and "https://api.staging.example.com" in logs
// and "Processed: RAW DATA FOR ITEM123 FROM https://api.staging.example.com (API Key Suffix: 123)"
// --- Another example: Using ReaderT.ask ---
ReaderT<CompletableFutureKind.Witness, AppConfig, AppConfig> getConfigSettingRT =
ReaderT.ask(futureMonad); // Provides the whole AppConfig
Kind<ReaderTKind.Witness<CompletableFutureKind.Witness, AppConfig>, String> getServiceUrlRT =
cfReaderTMonad.map(
(AppConfig cfg) -> "Service URL from ask: " + cfg.serviceUrl(),
READER_T.widen(getConfigSettingRT)
);
String stagingServiceUrl = FUTURE.join(
READER_T.narrow(getServiceUrlRT).run().apply(stagingConfig)
);
System.out.println("\nStaging Service URL via ask: " + stagingServiceUrl);
executor.shutdown();
try {
if (!executor.awaitTermination(5, TimeUnit.SECONDS)) {
executor.shutdownNow();
}
} catch (InterruptedException e) {
executor.shutdownNow();
Thread.currentThread().interrupt();
}
}
// --- ReaderT-based Service Operations ---
// --- Environment ---
record AppConfig(String apiKey, String serviceUrl, ExecutorService executor) {
}
// --- Service Response ---
record ServiceData(String rawData) {
}
record ProcessedData(String info) {
}
}
This example demonstrates:
- Defining an `AppConfig` environment.
- Creating service operations (`fetchServiceDataRT`, `processDataRT`) that return `ReaderT<CompletableFutureKind, AppConfig, A>`. These operations implicitly depend on `AppConfig`.
- Using `cfReaderTMonad.flatMap` to chain these operations. The `AppConfig` is passed implicitly through the chain.
- Executing the composed workflow (`composedWorkflow.run().apply(config)`) by providing a specific `AppConfig`. This "injects" the dependency at the very end.
- The asynchronicity from `CompletableFuture` is handled by the `futureMonad` within `ReaderTMonad` and `ReaderT`'s factories.
- Using `ReaderT.ask` to directly access the configuration within a `ReaderT` computation.
ReaderT simplifies managing computations that require a shared, read-only environment while also dealing with other monadic effects, leading to cleaner, more composable, and testable code by deferring environment injection.
Further Reading
Start with the Java-focused resources to understand dependency injection patterns, then explore General FP concepts for deeper understanding, and finally check Related Libraries to see alternative approaches.
Java-Focused Resources
Beginner Level:
- 📚 Dependency Injection the Functional Way - Baeldung's introduction to Reader (15 min read)
- 📄 Reader Monad for Dependency Injection - Practical examples without frameworks (12 min read)
- 🎥 Functional Dependency Injection - Conference talk on Reader pattern (40 min watch)
Intermediate Level:
- 📄 Configuration as Code with Reader - Rock the JVM's practical guide (20 min read)
- 📄 Reader vs Dependency Injection Frameworks - When to use what (15 min read)
Advanced:
- 🔬 ReaderT Design Pattern - FP Complete's production patterns (30 min read)
General FP Concepts
- 📖 Reader Monad Explained - HaskellWiki's clear explanation
- 📖 Environment Passing Style - Wikipedia on the underlying concept
- 📖 Functions as Context - Bartosz Milewski's blog on function contexts
Related Libraries & Comparisons
- 🔗 Cats Reader - Scala's implementation (called Kleisli)
- 🔗 Arrow Reader (Kotlin) - Kotlin FP approach
- 🔗 Haskell's ReaderT - Original inspiration
Community & Discussion
- 💬 Reader Monad vs Constructor Injection - Stack Overflow debate
- 💬 Using Reader in Production - Real-world experiences
- 💬 ReaderT Pattern at Scale - HN discussion from production teams
The StateT Transformer:
Combining Stateful Computation with Other Monadic Effects
- How to add stateful computation to any existing monad
- Building stack operations that can fail (StateT with Optional)
- Understanding the relationship between State and StateT<S, Identity, A>
- Creating complex workflows that manage both state and other effects
- Using `get`, `set`, `modify` operations within transformer contexts
The StateT monad transformer is a powerful construct that allows you to add state-management capabilities to an existing monadic context. Think of it as taking the State Monad and making it work on top of another monad, like OptionalKind, EitherKind, or IOKind.
This is incredibly useful when you have computations that are both stateful and involve other effects, such as:
- Potentially missing values (`Optional`)
- Operations that can fail (`Either`, `Try`)
- Side-effecting computations (`IO`)
What is StateT?
At its core, a StateT<S, F, A> represents a computation that:
- Takes an initial state of type `S`.
- Produces a result of type `A` along with a new state of type `S`.
- And this entire process of producing the `(newState, value)` pair is itself wrapped in an underlying monadic context `F`.
So, the fundamental structure of a StateT computation can be thought of as a function:
S -> F<StateTuple<S, A>>
Where:
- `S`: The type of the state.
- `F`: The witness type for the underlying monad (e.g., `OptionalKind.Witness`, `IOKind.Witness`).
- `A`: The type of the computed value.
- `StateTuple<S, A>`: A simple container holding a pair of `(state, value)`.
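As a quick, library-free illustration of that shape, the following sketch fixes `F` to `Optional` and uses `Map.entry` as a stand-in for `StateTuple` (`STEP` and its behaviour are invented for illustration):

```java
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// The raw StateT shape with F fixed to Optional: S -> Optional<(newState, value)>.
public class StateTShape {
    static final Function<Integer, Optional<Map.Entry<Integer, String>>> STEP =
        state -> state < 0
            ? Optional.empty()                                    // the F-effect: failure
            : Optional.of(Map.entry(state + 1, "seen " + state)); // (newState, value)
}
```

Running the function with a state both advances the state and may fail, which is exactly the combination `StateT<S, OptionalKind.Witness, A>` packages up.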
Key Classes and Concepts
- `StateT<S, F, A>`: The primary data type representing the stateful computation stacked on monad `F`. It holds the function `S -> Kind<F, StateTuple<S, A>>`.
- `StateTKind<S, F, A>`: The `Kind` representation for `StateT`, allowing it to be used with higher-kinded-j's typeclasses like `Monad`. This is what you'll mostly interact with when using `StateT` in a generic monadic context.
- `StateTKind.Witness<S, F>`: The higher-kinded type witness for `StateT<S, F, _>`. Note that both the state type `S` and the underlying monad witness `F` are part of the `StateT` witness.
- `StateTMonad<S, F>`: The `Monad` instance for `StateT<S, F, _>`. It requires a `Monad` instance for the underlying monad `F` to function.
- `StateTKindHelper`: A utility class providing static methods for working with `StateTKind`, such as `narrow` (to convert `Kind<StateTKind.Witness<S, F>, A>` back to `StateT<S, F, A>`), `runStateT`, `evalStateT`, and `execStateT`.
- `StateTuple<S, A>`: A simple record-like class holding the pair `(S state, A value)`.
Motivation: Why Use StateT?
Imagine you're processing a sequence of items, and for each item:
- You need to update some running total (state).
- The processing of an item might fail or return no result (e.g., `Optional`).
Without StateT, you might end up with deeply nested Optional<StateTuple<S, A>> and manually manage both the optionality and the state threading. StateT<S, OptionalKind.Witness, A> elegantly combines these concerns.
Usage
Creating StateT Instances
You typically create StateT instances in a few ways:
- Directly with `StateT.create()`: This is the most fundamental way, providing the state function and the underlying monad instance.

  ```java
  // Assume S = Integer (state type), F = OptionalKind.Witness, A = String (value type)
  OptionalMonad optionalMonad = OptionalMonad.INSTANCE;

  Function<Integer, Kind<OptionalKind.Witness, StateTuple<Integer, String>>> runFn =
      currentState -> {
        if (currentState < 0) {
          return OPTIONAL.widen(Optional.empty());
        }
        return OPTIONAL.widen(
            Optional.of(StateTuple.of(currentState + 1, "Value: " + currentState)));
      };

  StateT<Integer, OptionalKind.Witness, String> stateTExplicit =
      StateT.create(runFn, optionalMonad);
  Kind<StateTKind.Witness<Integer, OptionalKind.Witness>, String> stateTKind = stateTExplicit;
  ```

- Lifting values with `StateTMonad.of()`: This lifts a pure value `A` into the `StateT` context. The state remains unchanged, and the underlying monad `F` will wrap the result using its own `of` method.

  ```java
  StateTMonad<Integer, OptionalKind.Witness> stateTMonad = StateTMonad.instance(optionalMonad);
  Kind<StateTKind.Witness<Integer, OptionalKind.Witness>, String> pureStateT =
      stateTMonad.of("pure value");

  Optional<StateTuple<Integer, String>> pureResult =
      OPTIONAL.narrow(STATE_T.runStateT(pureStateT, 10));
  System.out.println("Pure StateT result: " + pureResult);
  // When run with state 10, this will result in Optional.of(StateTuple(10, "pure value"))
  ```
Running StateT Computations
To execute a StateT computation and extract the result, you use methods from StateTKindHelper or directly from the StateT object:
- `runStateT(initialState)`: Executes the computation with an `initialState` and returns the result wrapped in the underlying monad: `Kind<F, StateTuple<S, A>>`.

  ```java
  // Continuing the stateTKind from above:
  Kind<OptionalKind.Witness, StateTuple<Integer, String>> resultOptionalTuple =
      StateTKindHelper.runStateT(stateTKind, 10);
  Optional<StateTuple<Integer, String>> actualOptional = OPTIONAL.narrow(resultOptionalTuple);

  if (actualOptional.isPresent()) {
    StateTuple<Integer, String> tuple = actualOptional.get();
    System.out.println("New State (from stateTExplicit): " + tuple.state());
    System.out.println("Value (from stateTExplicit): " + tuple.value());
  } else {
    System.out.println("actualOptional was empty for initial state 10");
  }

  // Example with negative initial state (expecting empty Optional)
  Kind<OptionalKind.Witness, StateTuple<Integer, String>> resultEmptyOptional =
      StateTKindHelper.runStateT(stateTKind, -5);
  Optional<StateTuple<Integer, String>> actualEmpty = OPTIONAL.narrow(resultEmptyOptional);
  // Output: Is empty: true
  System.out.println("Is empty (for initial state -5): " + actualEmpty.isEmpty());
  ```

- `evalStateT(initialState)`: Executes and gives you `Kind<F, A>` (the value, discarding the final state).
- `execStateT(initialState)`: Executes and gives you `Kind<F, S>` (the final state, discarding the value).
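The relationship between the three runners can be sketched without the library. In this hedged, JDK-only sketch, `F` is fixed to `Optional`, `Map.entry` stands in for `StateTuple`, and the `RUN`/`eval`/`exec` names are invented: `eval` and `exec` are just `run` followed by projecting out one half of the tuple.

```java
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// eval/exec are projections of run: keep the value, or keep the final state.
public class StateTProjections {
    // The raw runner: S -> Optional<(newState, value)>
    static final Function<Integer, Optional<Map.Entry<Integer, String>>> RUN =
        s -> Optional.of(Map.entry(s + 1, "v" + s));

    static Optional<String> eval(int s) {   // like evalStateT: discard the state
        return RUN.apply(s).map(Map.Entry::getValue);
    }

    static Optional<Integer> exec(int s) {  // like execStateT: discard the value
        return RUN.apply(s).map(Map.Entry::getKey);
    }
}
```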
Composing StateT Actions
Like any monad, StateT computations can be composed using map and flatMap.
- `map(Function<A, B> fn)`: Transforms the value `A` to `B` within the `StateT` context, leaving the state transformation logic and the underlying monad `F`'s effect untouched for that step.

  ```java
  Kind<StateTKind.Witness<Integer, OptionalKind.Witness>, Integer> initialComputation =
      StateT.create(s -> OPTIONAL.widen(Optional.of(StateTuple.of(s + 1, s * 2))), optionalMonad);

  Kind<StateTKind.Witness<Integer, OptionalKind.Witness>, String> mappedComputation =
      stateTMonad.map(val -> "Computed: " + val, initialComputation);

  // Run mappedComputation with initial state 5:
  // 1. initialComputation runs: state becomes 6, value is 10. Wrapped in Optional.
  // 2. map's function ("Computed: " + 10) is applied to 10.
  // Result: Optional.of(StateTuple(6, "Computed: 10"))
  Optional<StateTuple<Integer, String>> mappedResult =
      OPTIONAL.narrow(STATE_T.runStateT(mappedComputation, 5));
  System.out.print("Mapped result (initial state 5): ");
  mappedResult.ifPresentOrElse(System.out::println, () -> System.out.println("Empty"));
  // Output: StateTuple[state=6, value=Computed: 10]
  ```

- `flatMap(Function<A, Kind<StateTKind.Witness<S, F>, B>> fn)`: Sequences two `StateT` computations. The state from the first computation is passed to the second. The effects of the underlying monad `F` are also sequenced according to `F`'s `flatMap`.

  ```java
  // stateTMonad and optionalMonad are defined
  Kind<StateTKind.Witness<Integer, OptionalKind.Witness>, Integer> firstStep =
      StateT.create(s -> OPTIONAL.widen(Optional.of(StateTuple.of(s + 1, s * 10))), optionalMonad);

  Function<Integer, Kind<StateTKind.Witness<Integer, OptionalKind.Witness>, String>> secondStepFn =
      prevValue -> StateT.create(
          s -> {
            if (prevValue > 100) {
              return OPTIONAL.widen(Optional.of(StateTuple.of(s + prevValue, "Large: " + prevValue)));
            } else {
              return OPTIONAL.widen(Optional.empty());
            }
          },
          optionalMonad);

  Kind<StateTKind.Witness<Integer, OptionalKind.Witness>, String> combined =
      stateTMonad.flatMap(secondStepFn, firstStep);

  // Run with initial state 15
  // 1. firstStep(15): state=16, value=150. Wrapped in Optional.of.
  // 2. secondStepFn(150) is called. It returns a new StateT.
  // 3. The new StateT is run with state=16:
  //    Its function: s' (which is 16) -> Optional.of(StateTuple(16 + 150, "Large: 150"))
  // Result: Optional.of(StateTuple(166, "Large: 150"))
  Optional<StateTuple<Integer, String>> combinedResult =
      OPTIONAL.narrow(STATE_T.runStateT(combined, 15));
  System.out.print("Combined result (initial state 15): ");
  combinedResult.ifPresentOrElse(System.out::println, () -> System.out.println("Empty"));
  // Output: StateTuple[state=166, value=Large: 150]

  // Run with initial state 5
  // 1. firstStep(5): state=6, value=50. Wrapped in Optional.of.
  // 2. secondStepFn(50) is called.
  // 3. The new StateT is run with state=6:
  //    Its function: s' (which is 6) -> Optional.empty()
  // Result: Optional.empty()
  Optional<StateTuple<Integer, String>> combinedEmptyResult =
      OPTIONAL.narrow(STATE_T.runStateT(combined, 5));
  // Output: true
  System.out.println("Is empty from small initial (state 5 for combined): "
      + combinedEmptyResult.isEmpty());
  ```
ap(ff, fa): Applies a wrapped function to a wrapped value.
Note on Null Handling: The `ap` method requires the function it extracts from the first `StateT` computation to be non-null. If the function is `null`, a `NullPointerException` will be thrown when the computation is executed. It is the developer's responsibility to ensure that any functions provided within a `StateT` context are non-null. Similarly, the value from the second computation may be `null`, and the provided function must be able to handle a `null` input if that is a valid state.
State-Specific Operations
While higher-kinded-j's StateT provides the core monadic structure, you'll often want common state operations like get, set, modify. These can be constructed using StateT.create or StateTKind.lift.
- `get()`: Retrieves the current state as the value.

```java
public static <S, F> Kind<StateTKind.Witness<S, F>, S> get(Monad<F> monadF) {
  Function<S, Kind<F, StateTuple<S, S>>> runFn = s -> monadF.of(StateTuple.of(s, s));
  return StateT.create(runFn, monadF);
}
// Usage: stateTMonad.flatMap(currentState -> ..., get(optionalMonad))
```

- `set(newState, monadF)`: Replaces the current state with `newState`. The value is often `Unit`.

```java
public static <S, F> Kind<StateTKind.Witness<S, F>, Unit> set(S newState, Monad<F> monadF) {
  Function<S, Kind<F, StateTuple<S, Unit>>> runFn = s -> monadF.of(StateTuple.of(newState, Unit.INSTANCE));
  return StateT.create(runFn, monadF);
}
```

- `modify(f, monadF)`: Modifies the state using a function.

```java
public static <S, F> Kind<StateTKind.Witness<S, F>, Unit> modify(Function<S, S> f, Monad<F> monadF) {
  Function<S, Kind<F, StateTuple<S, Unit>>> runFn = s -> monadF.of(StateTuple.of(f.apply(s), Unit.INSTANCE));
  return StateT.create(runFn, monadF);
}
```

- `gets(f, monadF)`: Retrieves a value derived from the current state.

```java
public static <S, F, A> Kind<StateTKind.Witness<S, F>, A> gets(Function<S, A> f, Monad<F> monadF) {
  Function<S, Kind<F, StateTuple<S, A>>> runFn = s -> monadF.of(StateTuple.of(s, f.apply(s)));
  return StateT.create(runFn, monadF);
}
```
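To see what these helpers do beneath the `Kind` machinery, here is a small, library-free sketch that models the same shape directly: a `StateT` over `Optional` is just a function `S -> Optional<(S, A)>`. All names below (`Pair`, `get`, `modify`, `flatMap`) are illustrative stand-ins, not the higher-kinded-j API.

```java
import java.util.Optional;
import java.util.function.Function;

// Library-free sketch of the idea behind StateT<S, Optional, A>:
// a state transition S -> Optional<(S, A)>.
public class StateOptionalSketch {
  public record Pair<S, A>(S state, A value) {}

  // get: the value is the current state
  public static <S> Function<S, Optional<Pair<S, S>>> get() {
    return s -> Optional.of(new Pair<>(s, s));
  }

  // modify: transform the state; the value carries no information
  public static <S> Function<S, Optional<Pair<S, Void>>> modify(Function<S, S> f) {
    return s -> Optional.of(new Pair<>(f.apply(s), null));
  }

  // flatMap: run the first transition; if it yields a result, feed the
  // new state into the transition produced from the value.
  public static <S, A, B> Function<S, Optional<Pair<S, B>>> flatMap(
      Function<S, Optional<Pair<S, A>>> first,
      Function<A, Function<S, Optional<Pair<S, B>>>> next) {
    return s -> first.apply(s).flatMap(p -> next.apply(p.value()).apply(p.state()));
  }

  public static void main(String[] args) {
    // increment the state, then read it back
    var program = flatMap(StateOptionalSketch.<Integer>modify(n -> n + 1),
                          ignored -> get());
    System.out.println(program.apply(41).map(Pair::value).orElseThrow()); // 42
  }
}
```

The real library wraps this function in a `StateT` record and exposes it through `Kind`, but the state-threading logic is exactly this `flatMap`.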
Let's simulate stack operations where the stack is a List<Integer> and operations might be absent if, for example, popping an empty stack.
```java
public class StateTStackExample {

  private static final OptionalMonad OPT_MONAD = OptionalMonad.INSTANCE;
  private static final StateTMonad<List<Integer>, OptionalKind.Witness> ST_OPT_MONAD =
      StateTMonad.instance(OPT_MONAD);

  // Helper to lift a state function into StateT<List<Integer>, OptionalKind.Witness, A>
  private static <A> Kind<StateTKind.Witness<List<Integer>, OptionalKind.Witness>, A> liftOpt(
      Function<List<Integer>, Kind<OptionalKind.Witness, StateTuple<List<Integer>, A>>> f) {
    return StateTKindHelper.stateT(f, OPT_MONAD);
  }

  // push operation
  public static Kind<StateTKind.Witness<List<Integer>, OptionalKind.Witness>, Unit> push(Integer value) {
    return liftOpt(stack -> {
      List<Integer> newStack = new LinkedList<>(stack);
      newStack.add(0, value); // Add to front
      return OPTIONAL.widen(Optional.of(StateTuple.of(newStack, Unit.INSTANCE)));
    });
  }

  // pop operation
  public static Kind<StateTKind.Witness<List<Integer>, OptionalKind.Witness>, Integer> pop() {
    return liftOpt(stack -> {
      if (stack.isEmpty()) {
        return OPTIONAL.widen(Optional.empty()); // Cannot pop from empty stack
      }
      List<Integer> newStack = new LinkedList<>(stack);
      Integer poppedValue = newStack.remove(0);
      return OPTIONAL.widen(Optional.of(StateTuple.of(newStack, poppedValue)));
    });
  }

  public static void main(String[] args) {
    var computation =
        For.from(ST_OPT_MONAD, push(10))
            .from(_ -> push(20))
            .from(_ -> pop())
            .from(_ -> pop())
            .yield((a, b, p1, p2) -> { // p1 and p2 are the two popped values
              System.out.println("Popped in order: " + p1 + ", then " + p2);
              return p1 + p2;
            });

    List<Integer> initialStack = Collections.emptyList();
    Kind<OptionalKind.Witness, StateTuple<List<Integer>, Integer>> resultWrapped =
        StateTKindHelper.runStateT(computation, initialStack);

    Optional<StateTuple<List<Integer>, Integer>> resultOpt =
        OPTIONAL.narrow(resultWrapped);

    resultOpt.ifPresentOrElse(
        tuple -> {
          System.out.println("Final value: " + tuple.value()); // Expected: 30
          System.out.println("Final stack: " + tuple.state()); // Expected: [] (empty)
        },
        () -> System.out.println("Computation resulted in empty Optional.")
    );

    // Example of popping an empty stack
    Kind<StateTKind.Witness<List<Integer>, OptionalKind.Witness>, Integer> popEmptyStack = pop();
    Optional<StateTuple<List<Integer>, Integer>> emptyPopResult =
        OPTIONAL.narrow(StateTKindHelper.runStateT(popEmptyStack, Collections.emptyList()));
    System.out.println("Popping empty stack was successful: " + emptyPopResult.isPresent()); // false
  }
}
```
Relationship to State Monad
The State Monad (`State<S, A>`) can be seen as a specialised version of `StateT`. Specifically, `State<S, A>` is equivalent to `StateT<S, IdKind.Witness, A>`, where `Id` is the Identity monad: a monad that adds no effects, so `Id<A>` is simply `A`. higher-kinded-j provides an `Id` monad for exactly this purpose.
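This equivalence can be illustrated without the library (the `Id`, `Tuple`, and method names below are stand-ins, not the higher-kinded-j API): wrapping the state/value pair in a do-nothing `Id` context adds no effect, so the `StateT` shape `S -> F<(S, A)>` collapses to the plain `State` shape `S -> (S, A)`.

```java
import java.util.function.Function;

// Library-free sketch: specialising StateT's shape S -> F<(S, A)> with an
// effect-free "Identity" context F recovers the plain State monad.
public class StateViaIdentity {
  public record Id<A>(A value) {}                // the Identity "effect": just the value
  public record Tuple<S, A>(S state, A value) {}

  // State<Integer, String> expressed as StateT<Integer, Id, String>:
  // a function Integer -> Id<Tuple<Integer, String>>
  public static Tuple<Integer, String> runCounter(int initial) {
    Function<Integer, Id<Tuple<Integer, String>>> step =
        s -> new Id<>(new Tuple<>(s + 1, "count=" + s));
    return step.apply(initial).value();          // unwrapping Id is free
  }

  public static void main(String[] args) {
    System.out.println(runCounter(7).value()); // count=7
    System.out.println(runCounter(7).state()); // 8
  }
}
```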
Further Reading
- State Monad: Understand the basics of stateful computations.
- Monad Transformers: General concept of monad transformers.
- Documentation for the underlying monads you might use with `StateT`, such as `Optional`, `Either`, and `IO`.
Using StateT helps write cleaner, more composable code when dealing with computations that involve both state and other monadic effects.
Further Reading
Start with the Java-focused resources to understand state management patterns, then explore General FP concepts for deeper understanding, and finally check Related Libraries to see alternative approaches.
Java-Focused Resources
Beginner Level:
- 📚 State Management Without Mutability - Baeldung's functional state guide (15 min read)
- 📄 Immutable State Transitions in Java - Practical patterns (12 min read)
- 🎥 Functional State Machines - State monad concepts visualised (30 min watch)
Intermediate Level:
- 📄 Threading State Through Computations - Rock the JVM's excellent tutorial (25 min read)
- 📄 Combining State and Failure - StateT with Optional/Either (20 min read)
Advanced:
- 🔬 State Monad for Functional Rendering - John Carmack on functional state (60 min watch)
- 🔬 Implementing State in Pure FP - Gabriel Gonzalez's deep dive (45 min watch)
General FP Concepts
- 📖 State Monad Explained - HaskellWiki's detailed guide
- 📖 The Essence of State - Classic paper by Wadler (PDF, academic but readable)
- 📖 Purely Functional State - Stephen Diehl's tutorial
Related Libraries & Comparisons
- 🔗 Cats State - Scala's mature implementation
- 🔗 Arrow State (Kotlin) - Kotlin's approach
- 🔗 Redux for State Management - JavaScript's popular state library (different paradigm but related)
Community & Discussion
- 💬 When to Use State Monad - Stack Overflow practical advice
- 💬 State Monad vs Mutable State - Reddit discussion on trade-offs
- 💬 StateT in Production Code - HN thread on real-world usage
The Order Workflow Example
This example is a practical demonstration of how to use the Higher-Kinded-J library to manage a common real-world scenario.
The scenario covers an order workflow that involves asynchronous operations, where those operations can fail with specific, expected business errors.
Async Operations with Error Handling:
You can find the code for the Order Processing example in the org.higherkindedj.example.order package.
Goal of this Example:
- To show how to compose asynchronous steps (using `CompletableFuture`) with steps that might result in domain-specific errors (using `Either`).
- To introduce the `EitherT` monad transformer as a powerful tool to simplify working with nested structures like `CompletableFuture<Either<DomainError, Result>>`.
- To illustrate how to handle different kinds of errors:
  - Domain Errors: Expected business failures (e.g., invalid input, item out of stock) represented by `Either.Left`.
  - System Errors: Unexpected issues during async execution (e.g., network timeouts) handled by `CompletableFuture`.
  - Synchronous Exceptions: Using `Try` to capture exceptions from synchronous code and integrate them into the error handling flow.
- To demonstrate error recovery using `MonadError` capabilities.
- To show how dependencies (like logging) can be managed within the workflow steps.
Prerequisites:
Before diving in, it's helpful to have a basic understanding of:
- Core Concepts of Higher-Kinded-J (`Kind` and Type Classes).
- The specific types being used: Supported Types.
- The general Usage Guide.
Key Files:
- `Dependencies.java`: Holds external dependencies (e.g., logger).
- `OrderWorkflowRunner.java`: Orchestrates the workflow, initialising and running different workflow versions (Workflow1 and Workflow2).
- `OrderWorkflowSteps.java`: Defines the individual workflow steps (sync/async), accepting `Dependencies`.
- `Workflow1.java`: Implements the order processing workflow using `EitherT` over `CompletableFuture`, with the initial validation step using an `Either`.
- `Workflow2.java`: Implements a similar workflow to `Workflow1`, but the initial validation step uses a `Try` that is then converted to an `Either`.
- `WorkflowModels.java`: Data records (`OrderData`, `ValidatedOrder`, etc.).
- `DomainError.java`: Sealed interface defining specific business errors.
Order Processing Workflow
The Problem: Combining Asynchronicity and Typed Errors
Imagine an online order process with the following stages:
1. Validate Order Data: Check quantity, product ID, etc. (Can fail with `ValidationError`.) This is a synchronous operation.
2. Check Inventory: Call an external inventory service (async). (Can fail with `StockError`.)
3. Process Payment: Call a payment gateway (async). (Can fail with `PaymentError`.)
4. Create Shipment: Call a shipping service (async). (Can fail with `ShippingError`, some of which might be recoverable.)
5. Notify Customer: Send an email/SMS (async). (Might fail, but should not critically fail the entire order.)
We face several challenges:
- Asynchronicity: Steps 2, 3, 4, and 5 involve network calls and should use `CompletableFuture`.
- Domain Errors: Steps can fail for specific business reasons. We want to represent these failures with types (like `ValidationError`, `StockError`) rather than just generic exceptions or nulls. `Either<DomainError, SuccessValue>` is a good fit for this.
- Composition: How do we chain these steps together? Directly nesting `CompletableFuture<Either<DomainError, ...>>` leads to complex and hard-to-read code (often called "callback hell" or nested `thenCompose`/`thenApply` chains).
- Short-Circuiting: If validation fails (returns `Left(ValidationError)`), we shouldn't proceed to check inventory or process payment. The workflow should stop and return the validation error.
- Dependencies & Logging: Steps need access to external resources (like service clients, configuration, loggers). How do we manage this cleanly?
The Solution: EitherT Monad Transformer + Dependency Injection
This example tackles these challenges using:
- `Either<DomainError, R>`: To represent the result of steps that can fail with a specific business error (`DomainError`). `Left` holds the error, `Right` holds the success value `R`.
- `CompletableFuture<T>`: To handle the asynchronous nature of external service calls. It also inherently handles system-level exceptions (network timeouts, service unavailability) by completing exceptionally with a `Throwable`.
- `EitherT<F_OUTER_WITNESS, L_ERROR, R_VALUE>`: The key component! This monad transformer wraps a nested structure `Kind<F_OUTER_WITNESS, Either<L_ERROR, R_VALUE>>`. In our case:
  - `F_OUTER_WITNESS` (Outer Monad's Witness) = `CompletableFutureKind.Witness` (handling async execution and system errors, `Throwable`).
  - `L_ERROR` (Left Type) = `DomainError` (handling business errors).
  - `R_VALUE` (Right Type) = The success value of a step.

  It provides `map`, `flatMap`, and `handleErrorWith` operations that work seamlessly across both the outer `CompletableFuture` context and the inner `Either` context.
- Dependency Injection: A `Dependencies` record holds external collaborators (like a logger). This record is passed to `OrderWorkflowSteps`, making dependencies explicit and testable.
- Structured Logging: Steps use the injected logger (`dependencies.log(...)`) for consistent logging.
Setting up EitherTMonad
In OrderWorkflowRunner, we get the necessary type class instances:
```java
// MonadError instance for CompletableFuture (handles Throwable)
// F_OUTER_WITNESS for CompletableFuture is CompletableFutureKind.Witness
private final @NonNull MonadError<CompletableFutureKind.Witness, Throwable> futureMonad =
    CompletableFutureMonad.INSTANCE;

// EitherTMonad instance, providing the outer monad (futureMonad).
// This instance handles DomainError for the inner Either.
// The HKT witness for EitherT here is EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>
private final @NonNull
    MonadError<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, DomainError>
        eitherTMonad = new EitherTMonad<>(this.futureMonad);
```
Now, eitherTMonad can be used to chain operations on EitherT values (which are Kind<EitherTKind.Witness<CompletableFutureKind.Witness, DomainError>, A>). Its flatMap method automatically handles:
- Async Sequencing: Delegated to `futureMonad.flatMap` (which translates to `CompletableFuture::thenCompose`).
- Error Short-Circuiting: If an inner `Either` becomes `Left(domainError)`, subsequent `flatMap` operations are skipped, propagating the `Left` within the `CompletableFuture`.
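The short-circuiting behaviour pairs naturally with recovery. Below is a library-free sketch of the `handleErrorWith` idea for the same nested shape: inspect a `Left` and either recover with a new value or re-raise. The `Either` type here is again a stand-in, not the higher-kinded-j API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// Library-free sketch of handleErrorWith for the CompletableFuture<Either<E, A>> shape.
public class RecoverySketch {
  public sealed interface Either<L, R> permits Left, Right {}
  public record Left<L, R>(L error) implements Either<L, R> {}
  public record Right<L, R>(R value) implements Either<L, R> {}

  public static <L, A> CompletableFuture<Either<L, A>> handleErrorWith(
      CompletableFuture<Either<L, A>> fa,
      Function<L, CompletableFuture<Either<L, A>>> recover) {
    return fa.thenCompose(either -> switch (either) {
      case Right<L, A> r -> CompletableFuture.<Either<L, A>>completedFuture(r);
      case Left<L, A> l -> recover.apply(l.error()); // delegate to the recovery function
    });
  }

  public static void main(String[] args) {
    CompletableFuture<Either<String, String>> failedShipment =
        CompletableFuture.completedFuture(new Left<>("Temporary Glitch"));
    var recovered = handleErrorWith(failedShipment, error ->
        CompletableFuture.<Either<String, String>>completedFuture(
            "Temporary Glitch".equals(error)
                ? new Right<>("DEFAULT_SHIPPING_USED")
                : new Left<>(error)));
    System.out.println(recovered.join()); // Right[value=DEFAULT_SHIPPING_USED]
  }
}
```

This is the same pattern the shipment step uses below, with `eitherTMonad.handleErrorWith` doing the unwrapping for you.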
Workflow Step-by-Step (Workflow1.java)
Let's trace the execution flow defined in Workflow1. The workflow uses a `For` comprehension to sequentially chain the steps. The state (`WorkflowContext`) is carried implicitly within the `Right` side of the `EitherT`.
The OrderWorkflowRunner initialises and calls Workflow1 (or Workflow2). The core logic for composing the steps resides within these classes.
We start with OrderData and create an initial WorkflowContext.
Next, eitherTMonad.of(initialContext) lifts this context into an EitherT value, representing a CompletableFuture that is already successfully completed with an Either.Right(initialContext).
```java
// From Workflow1.run()
var initialContext = WorkflowModels.WorkflowContext.start(orderData);

// The For-comprehension expresses the workflow sequentially.
// Each 'from' step represents a monadic bind (flatMap).
var workflow = For.from(eitherTMonad, eitherTMonad.of(initialContext))
    // Step 1: Validation. The lambda receives the initial context.
    .from(ctx1 -> {
      var validatedOrderET = EitherT.fromEither(futureMonad, EITHER.narrow(steps.validateOrder(ctx1.initialData())));
      return eitherTMonad.map(ctx1::withValidatedOrder, validatedOrderET);
    })
    // Step 2: Inventory. The lambda receives a tuple of (initial context, context after validation).
    .from(t -> {
      var ctx = t._2(); // Get the context from the previous step
      var inventoryCheckET = EitherT.fromKind(steps.checkInventoryAsync(ctx.validatedOrder().productId(), ctx.validatedOrder().quantity()));
      return eitherTMonad.map(ignored -> ctx.withInventoryChecked(), inventoryCheckET);
    })
    // Step 3: Payment. The lambda receives a tuple of all previous results. The latest context is the last element.
    .from(t -> {
      var ctx = t._3(); // Get the context from the previous step
      var paymentConfirmET = EitherT.fromKind(steps.processPaymentAsync(ctx.validatedOrder().paymentDetails(), ctx.validatedOrder().amount()));
      return eitherTMonad.map(ctx::withPaymentConfirmation, paymentConfirmET);
    })
    // Step 4: Shipment (with error handling).
    .from(t -> {
      var ctx = t._4(); // Get the context from the previous step
      var shipmentAttemptET = EitherT.fromKind(steps.createShipmentAsync(ctx.validatedOrder().orderId(), ctx.validatedOrder().shippingAddress()));
      var recoveredShipmentET = eitherTMonad.handleErrorWith(shipmentAttemptET, error -> {
        if (error instanceof DomainError.ShippingError(var reason) && "Temporary Glitch".equals(reason)) {
          dependencies.log("WARN: Recovering from temporary shipping glitch for order " + ctx.validatedOrder().orderId());
          return eitherTMonad.of(new WorkflowModels.ShipmentInfo("DEFAULT_SHIPPING_USED"));
        }
        return eitherTMonad.raiseError(error);
      });
      return eitherTMonad.map(ctx::withShipmentInfo, recoveredShipmentET);
    })
    // Step 5 & 6 are combined in the yield for a cleaner result.
    .yield(t -> {
      var finalContext = t._5(); // The context after the last 'from'
      var finalResult = new WorkflowModels.FinalResult(
          finalContext.validatedOrder().orderId(),
          finalContext.paymentConfirmation().transactionId(),
          finalContext.shipmentInfo().trackingId()
      );
      // Attempt notification, but recover from failure, returning the original FinalResult.
      var notifyET = EitherT.fromKind(steps.notifyCustomerAsync(finalContext.initialData().customerId(), "Order processed: " + finalResult.orderId()));
      var recoveredNotifyET = eitherTMonad.handleError(notifyET, notifyError -> {
        dependencies.log("WARN: Notification failed for order " + finalResult.orderId() + ": " + notifyError.message());
        return Unit.INSTANCE;
      });
      // Map the result of the notification back to the FinalResult we want to return.
      return eitherTMonad.map(ignored -> finalResult, recoveredNotifyET);
    });

// The yield returns a Kind<M, Kind<M, R>>, so we must flatten it one last time.
var flattenedFinalResultET = eitherTMonad.flatMap(x -> x, workflow);
var finalConcreteET = EITHER_T.narrow(flattenedFinalResultET);
return finalConcreteET.value();
```
There is a lot going on in the `For` comprehension, so let's try to unpick it.
Breakdown of the For Comprehension:
- `For.from(eitherTMonad, eitherTMonad.of(initialContext))`: The comprehension is initiated with a starting value. We lift the initial `WorkflowContext` into our `EitherT` monad, representing a successful, asynchronous starting point: `Future<Right(initialContext)>`.
- `.from(ctx1 -> ...)` (Validation):
  - Purpose: Validates the basic order data.
  - Sync/Async: Synchronous. `steps.validateOrder` returns `Kind<EitherKind.Witness<DomainError>, ValidatedOrder>`.
  - HKT Integration: The `Either` result is lifted into the `EitherT<CompletableFuture, ...>` context using `EitherT.fromEither(...)`. This wraps the immediate `Either` result in a completed `CompletableFuture`.
  - Error Handling: If validation fails, `validateOrder` returns a `Left(ValidationError)`. This becomes a `Future<Left(ValidationError)>`, and the `For` comprehension automatically short-circuits, skipping all subsequent steps.
- `.from(t -> ...)` (Inventory Check):
  - Purpose: Asynchronously checks if the product is in stock.
  - Sync/Async: Asynchronous. `steps.checkInventoryAsync` returns `Kind<CompletableFutureKind.Witness, Either<DomainError, Unit>>`.
  - HKT Integration: The `Kind` returned by the async step is directly wrapped into `EitherT` using `EitherT.fromKind(...)`.
  - Error Handling: Propagates `Left(StockError)` or underlying `CompletableFuture` failures.
- `.from(t -> ...)` (Payment):
  - Purpose: Asynchronously processes the payment.
  - Sync/Async: Asynchronous.
  - HKT Integration & Error Handling: Works just like the inventory check, propagating `Left(PaymentError)` or `CompletableFuture` failures.
- `.from(t -> ...)` (Shipment with Recovery):
  - Purpose: Asynchronously creates a shipment.
  - HKT Integration: Uses `EitherT.fromKind` and `eitherTMonad.handleErrorWith`.
  - Error Handling & Recovery: If `createShipmentAsync` returns a `Left(ShippingError("Temporary Glitch"))`, the `handleErrorWith` block catches it and returns a successful `EitherT` with default shipment info, allowing the workflow to proceed. All other errors are propagated.
- `.yield(t -> ...)` (Final Result and Notification):
  - Purpose: The final block of the `For` comprehension. It takes the accumulated results from all previous steps (in a tuple `t`) and produces the final result of the entire chain.
  - Logic:
    - It constructs the `FinalResult` from the successful `WorkflowContext`.
    - It attempts the final, non-critical notification step (`notifyCustomerAsync`).
    - Crucially, it uses `handleError` on the notification result. If notification fails, it logs a warning but recovers to a `Right(Unit.INSTANCE)`, ensuring the overall workflow remains successful.
    - It then maps the result of the recovered notification step back to the `FinalResult`, which becomes the final value of the entire comprehension.
- Final `flatMap` and Unwrapping:
  - The `yield` block itself can return a monadic value. To get the final, single-layer result, we do one last `flatMap` over the `For` comprehension's result.
  - Finally, `EITHER_T.narrow(...)` and `.value()` are used to extract the underlying `Kind<CompletableFutureKind.Witness, Either<...>>` from the `EitherT` record. The `main` method in `OrderWorkflowRunner` then uses `FUTURE.narrow()` and `.join()` to get the final `Either` result for printing.
Alternative: Handling Exceptions with Try (Workflow2.java)
The OrderWorkflowRunner also initialises and can run Workflow2. This workflow is identical to Workflow1 except for the first step. It demonstrates how to integrate synchronous code that might throw exceptions.
```java
// From Workflow2.run(), inside the first .from(...)
.from(ctx1 -> {
    var tryResult = TRY.narrow(steps.validateOrderWithTry(ctx1.initialData()));
    var eitherResult = tryResult.toEither(
        throwable -> (DomainError) new DomainError.ValidationError(throwable.getMessage()));
    var validatedOrderET = EitherT.fromEither(futureMonad, eitherResult);
    // ... map context ...
})
```
- The `steps.validateOrderWithTry` method is designed to throw exceptions on validation failure (e.g., `IllegalArgumentException`).
- `TRY.tryOf(...)` in `OrderWorkflowSteps` wraps this potentially exception-throwing code, returning a `Kind<TryKind.Witness, ValidatedOrder>`.
- In `Workflow2`, we `narrow` this to a concrete `Try<ValidatedOrder>`.
- We use `tryResult.toEither(...)` to convert the `Try` into an `Either<DomainError, ValidatedOrder>`:
  - A `Try.Success(validatedOrder)` becomes `Either.right(validatedOrder)`.
  - A `Try.Failure(throwable)` is mapped to an `Either.left(new DomainError.ValidationError(throwable.getMessage()))`.
- The resulting `Either` is then lifted into `EitherT` using `EitherT.fromEither`, and the rest of the workflow proceeds as before.
This demonstrates a practical pattern for integrating synchronous, exception-throwing code into the EitherT-based workflow by explicitly converting failures into your defined DomainError types.
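The same conversion can be sketched without the library. Here the role of `Try` is played by a plain `try`/`catch` that captures the exception and maps it to a typed error; the `Either` and `ValidationError` types below are illustrative stand-ins, not the higher-kinded-j API.

```java
import java.util.function.Supplier;

// Library-free sketch of the Try-to-Either conversion used in Workflow2:
// run exception-throwing code, capture the failure, map it to a domain error.
public class TryToEitherSketch {
  public sealed interface Either<L, R> permits Left, Right {}
  public record Left<L, R>(L error) implements Either<L, R> {}
  public record Right<L, R>(R value) implements Either<L, R> {}
  public record ValidationError(String message) {}

  public static <R> Either<ValidationError, R> tryOf(Supplier<R> thunk) {
    try {
      return new Right<>(thunk.get());
    } catch (RuntimeException e) {               // capture the synchronous failure
      return new Left<>(new ValidationError(e.getMessage()));
    }
  }

  public static int validateQuantity(int qty) {
    if (qty <= 0) throw new IllegalArgumentException("Quantity must be positive");
    return qty;
  }

  public static void main(String[] args) {
    System.out.println(tryOf(() -> validateQuantity(3)));
    // Right[value=3]
    System.out.println(tryOf(() -> validateQuantity(0)));
    // Left[error=ValidationError[message=Quantity must be positive]]
  }
}
```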
This example illustrates several powerful patterns enabled by Higher-Kinded-J:
- `EitherT` for `Future<Either<Error, Value>>`: This is the core pattern. Use `EitherT` whenever you need to sequence asynchronous operations (`CompletableFuture`) where each step can also fail with a specific, typed error (`Either`).
  - Instantiate `EitherTMonad<F_OUTER_WITNESS, L_ERROR>` with the `Monad<F_OUTER_WITNESS>` instance for your outer monad (e.g., `CompletableFutureMonad`).
  - Use `eitherTMonad.flatMap` or a `For` comprehension to chain steps.
  - Lift async results (`Kind<F_OUTER_WITNESS, Either<L, R>>`) into `EitherT` using `EitherT.fromKind`.
  - Lift sync results (`Either<L, R>`) into `EitherT` using `EitherT.fromEither`.
  - Lift pure values (`R`) into `EitherT` using `eitherTMonad.of` or `EitherT.right`.
  - Lift errors (`L`) into `EitherT` using `eitherTMonad.raiseError` or `EitherT.left`.
- Typed Domain Errors: Use `Either` (often with a sealed interface like `DomainError` for the `Left` type) to represent expected business failures clearly. This improves type safety and makes error handling more explicit.
- Error Recovery: Use `eitherTMonad.handleErrorWith` (for complex recovery returning another `EitherT`) or `handleError` (for simpler recovery to a pure value for the `Right` side) to inspect `DomainError`s and potentially recover, allowing the workflow to continue gracefully.
- Integrating `Try`: If dealing with synchronous legacy code or libraries that throw exceptions, wrap calls using `TRY.tryOf`. Then, `narrow` the `Try` and use `toEither` (or `fold`) to convert `Try.Failure` into an appropriate `Either.Left<DomainError>` before lifting into `EitherT`.
- Dependency Injection: Pass necessary dependencies (loggers, service clients, configurations) into your workflow steps (e.g., via a constructor and a `Dependencies` record). This promotes loose coupling and testability.
- Structured Logging: Use an injected logger within steps to provide visibility into the workflow's progress and state without tying the steps to a specific logging implementation (like `System.out`).
- `var` for Conciseness: Utilise Java's `var` for local variable type inference where the type is clear from the right-hand side of an assignment. This can reduce verbosity, especially with complex generic types common in HKT.
While this example covers the core concepts, a real-world application might involve more complexities. Here are some areas to consider for further refinement:
- More Sophisticated Error Handling/Retries:
  - Retry Mechanisms: For transient errors (like network hiccups or temporary service unavailability), you might implement retry logic. This could involve retrying a failed async step a certain number of times with exponential backoff. While higher-kinded-j itself doesn't provide specific retry utilities, you could integrate libraries like Resilience4j or implement custom retry logic within a `flatMap` or `handleErrorWith` block.
  - Compensating Actions (Sagas): If a step fails after previous steps have caused side effects (e.g., payment succeeds, but shipment fails irrevocably), you might need to trigger compensating actions (e.g., refund payment). This often leads to more complex Saga patterns.
- Configuration of Services:
  - The `Dependencies` record currently only holds a logger. In a real application, it would also provide configured instances of service clients (e.g., `InventoryService`, `PaymentGatewayClient`, `ShippingServiceClient`). These clients would be interfaces, with concrete implementations (real or mock for testing) injected.
- Parallel Execution of Independent Steps:
  - If some workflow steps are independent and can be executed concurrently, you could leverage `CompletableFuture.allOf` (to await all) or `CompletableFuture.thenCombine` (to combine results of two).
  - Integrating these with `EitherT` would require careful management of the `Either` results from parallel futures. For instance, if you run two `EitherT` operations in parallel, you'd get two `CompletableFuture<Either<DomainError, ResultX>>`. You would then need to combine these, deciding how to aggregate errors if multiple occur, or how to proceed if one fails and others succeed.
- Transactionality:
  - For operations requiring atomicity (all succeed or all fail and roll back), traditional distributed transactions are complex. The Saga pattern mentioned above is a common alternative for managing distributed consistency.
  - Individual steps might interact with transactional resources (e.g., a database). The workflow itself would coordinate these, but doesn't typically manage a global transaction across disparate async services.
- More Detailed & Structured Logging:
  - The current logging is simple string messages. For better observability, use a structured logging library (e.g., SLF4J with Logback/Log4j2) and log key-value pairs (e.g., `orderId`, `stepName`, `status`, `durationMs`, `errorType` if applicable). This makes logs easier to parse, query, and analyse.
  - Consider logging at the beginning and end of each significant step, including the outcome (success/failure and error details).
- Metrics & Monitoring:
  - Instrument the workflow to emit metrics (e.g., using Micrometer). Track things like workflow execution time, step durations, success/failure counts for each step, and error rates. This is crucial for monitoring the health and performance of the system.
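As a starting point for the retry idea mentioned above, here is a JDK-only sketch of retrying an async step with exponential backoff. It is deliberately minimal (no jitter, no error filtering); real projects would more likely reach for Resilience4j, and all names here are illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Minimal retry-with-backoff sketch for an async step, using only the JDK.
public class RetrySketch {
  public static <A> CompletableFuture<A> retry(
      Supplier<CompletableFuture<A>> step, int attemptsLeft, long delayMillis) {
    return step.get().exceptionallyCompose(error -> {
      if (attemptsLeft <= 1) {
        return CompletableFuture.failedFuture(error); // out of attempts: propagate
      }
      // wait, then retry with twice the delay (exponential backoff)
      return CompletableFuture.supplyAsync(() -> (Void) null,
              CompletableFuture.delayedExecutor(delayMillis, TimeUnit.MILLISECONDS))
          .thenCompose(ignored -> retry(step, attemptsLeft - 1, delayMillis * 2));
    });
  }

  public static void main(String[] args) {
    var attempts = new AtomicInteger();
    // A step that fails twice, then succeeds on the third attempt.
    Supplier<CompletableFuture<String>> flaky = () ->
        attempts.incrementAndGet() < 3
            ? CompletableFuture.failedFuture(new RuntimeException("timeout"))
            : CompletableFuture.completedFuture("ok");
    System.out.println(retry(flaky, 5, 10).join()); // ok
  }
}
```

The same shape fits inside an `EitherT` workflow by treating a recoverable `Left` (rather than an exceptional future) as the trigger for the retry.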
Building on the foundational patterns from this example, Higher-Kinded-J can help you create more robust, resilient, and observable workflows.
Building a Playable Draughts Game

This tutorial will guide you through building a complete and playable command-line draughts (checkers) game.
We will provide all the necessary code, broken down into manageable files. More importantly, we will demonstrate how higher-kinded-j makes this process more robust, maintainable, and functionally elegant by cleanly separating game logic, user interaction, and state management.
The Functional Approach
At its core, a game like draughts involves several key aspects where functional patterns can shine:
- State Management: The board, the position of pieces, whose turn it is – this is all game state. Managing this immutably can prevent a host of bugs.
- User Input: Players will enter moves, which might be valid, invalid, or incorrectly formatted.
- Game Logic: Operations like validating a move, capturing a piece, checking for kings, or determining a winner.
- Side Effects: Interacting with the console for input and output.
higher-kinded-j provides monads that are perfect for these tasks:
- `State` Monad: For cleanly managing and transitioning the game state without mutable variables.
- `Either` Monad: For handling input parsing and move validation, clearly distinguishing between success and different kinds of errors.
- `IO` Monad: For encapsulating side effects like reading from and printing to the console, keeping the core logic pure.
- `For` Comprehension: To flatten sequences of monadic operations (`flatMap` calls) into a more readable, sequential style.
By using these, we can build a more declarative and composable game.
The Complete Code
You can find the complete code in the package:
Step 1: Core Concepts Quick Recap
Before we write game code, let's briefly revisit why higher-kinded-j is necessary. Java doesn't let us write, for example, a generic function that works for any container `F<A>` (like `List<A>` or `Optional<A>`). higher-kinded-j simulates this with:
- `Kind<F, A>`: A bridge interface representing a type `A` within a context `F`.
- Witness Types: Marker types that stand in for `F` (the type constructor).
- Type Classes: Interfaces like `Functor`, `Applicative`, `Monad`, and `MonadError` that define operations (like `map`, `flatMap`, `handleErrorWith`) which work over these `Kind`s.
For a deeper dive, check out the Core Concepts of Higher-Kinded-J and the Usage Guide.
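For intuition, the whole simulation can be boiled down to a few lines. The sketch below is a deliberately simplified model of the idea, not the actual higher-kinded-j API: a `Kind<F, A>` interface, a witness type for `Optional`, and a `Functor` instance that lets one generic function serve any container.

```java
import java.util.Optional;
import java.util.function.Function;

// Simplified model of the HKT simulation: Kind<F, A> means "an A inside the
// container whose witness is F", and generic code is written against Kind.
public class KindSketch {
  public interface Kind<F, A> {}

  // Witness + wrapper for Optional
  public static final class OptWitness {}
  public record OptKind<A>(Optional<A> value) implements Kind<OptWitness, A> {}

  public interface Functor<F> {
    <A, B> Kind<F, B> map(Function<A, B> f, Kind<F, A> fa);
  }

  public static final Functor<OptWitness> OPT_FUNCTOR = new Functor<>() {
    @Override
    public <A, B> Kind<OptWitness, B> map(Function<A, B> f, Kind<OptWitness, A> fa) {
      // "narrow" the Kind back to its concrete wrapper, then delegate to Optional.map
      return new OptKind<>(((OptKind<A>) fa).value().map(f));
    }
  };

  // Generic code that works for ANY F with a Functor instance:
  public static <F> Kind<F, Integer> doubleIt(Functor<F> functor, Kind<F, Integer> fa) {
    return functor.map(n -> n * 2, fa);
  }

  public static void main(String[] args) {
    Kind<OptWitness, Integer> wrapped = new OptKind<>(Optional.of(21));
    System.out.println(((OptKind<Integer>) doubleIt(OPT_FUNCTOR, wrapped)).value()); // Optional[42]
  }
}
```

The real library adds safe `widen`/`narrow` helpers and many more type classes, but this is the core trick.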
Step 2: Defining the Draughts Game State
Our game state needs to track the board, pieces, and current player. First, we need to define the core data structures of our game. These simple, immutable records represent the game's state.
```java
// Enum for the two players
enum Player { RED, BLACK }

// Enum for the type of piece
enum PieceType { MAN, KING }

// A piece on the board, owned by a player with a certain type
record Piece(Player owner, PieceType type) {}

// A square on the 8x8 board, identified by row and column
record Square(int row, int col) {
  @Override
  public @NonNull String toString() {
    return "" + (char) ('a' + col) + (row + 1);
  }
}

// Represents an error during move parsing or validation
record GameError(String description) {}

// The command to make a move from one square to another
record MoveCommand(Square from, Square to) {}

// The outcome of a move attempt
enum MoveOutcome { SUCCESS, INVALID_MOVE, CAPTURE_MADE, GAME_WON }
record MoveResult(MoveOutcome outcome, String message) {}
```
We can define a GameState record:
```java
// The complete, immutable state of the game at any point in time
public record GameState(Map<Square, Piece> board, Player currentPlayer, String message, boolean isGameOver) {

  public static GameState initial() {
    Map<Square, Piece> startingBoard = new HashMap<>();
    // Place BLACK pieces
    for (int r = 0; r < 3; r++) {
      for (int c = (r % 2 != 0) ? 0 : 1; c < 8; c += 2) {
        startingBoard.put(new Square(r, c), new Piece(Player.BLACK, PieceType.MAN));
      }
    }
    // Place RED pieces
    for (int r = 5; r < 8; r++) {
      for (int c = (r % 2 != 0) ? 0 : 1; c < 8; c += 2) {
        startingBoard.put(new Square(r, c), new Piece(Player.RED, PieceType.MAN));
      }
    }
    return new GameState(Collections.unmodifiableMap(startingBoard), Player.RED, "Game started. RED's turn.", false);
  }

  GameState withBoard(Map<Square, Piece> newBoard) {
    return new GameState(Collections.unmodifiableMap(newBoard), this.currentPlayer, this.message, this.isGameOver);
  }

  GameState withCurrentPlayer(Player nextPlayer) {
    return new GameState(this.board, nextPlayer, this.message, this.isGameOver);
  }

  GameState withMessage(String newMessage) {
    return new GameState(this.board, this.currentPlayer, newMessage, this.isGameOver);
  }

  GameState withGameOver() {
    return new GameState(this.board, this.currentPlayer, this.message, true);
  }

  GameState togglePlayer() {
    Player next = (this.currentPlayer == Player.RED) ? Player.BLACK : Player.RED;
    return withCurrentPlayer(next).withMessage(next + "'s turn.");
  }
}
```
We'll use the State<S, A> monad from higher-kinded-j to manage this GameState. A State<GameState, A> represents a computation that takes an initial GameState and produces a result A along with a new, updated GameState. Explore the State Monad documentation for more.
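To see why this shape is useful, it helps to strip away the library machinery. Conceptually, a `State<S, A>` is just a function `S -> (A, S)`. The following is a minimal, self-contained sketch of that idea (the `Pair` and `SimpleState` names are hypothetical, not the library's actual API):

```java
import java.util.function.Function;

// Hypothetical, simplified model of the State idea: a function S -> (A, S).
// The real higher-kinded-j State expresses the same concept behind its
// Kind/witness machinery; this sketch only shows the core mechanics.
record Pair<A, S>(A value, S state) {}

record SimpleState<S, A>(Function<S, Pair<A, S>> run) {

  static <S, A> SimpleState<S, A> of(Function<S, Pair<A, S>> run) {
    return new SimpleState<>(run);
  }

  // Sequence two stateful computations, threading the state through automatically.
  <B> SimpleState<S, B> flatMap(Function<A, SimpleState<S, B>> f) {
    return new SimpleState<>(s -> {
      Pair<A, S> first = run.apply(s);
      return f.apply(first.value()).run().apply(first.state());
    });
  }
}
```

Running `SimpleState.of(n -> new Pair<>("tick " + n, n + 1))` against an initial state of `0` yields the pair `("tick 0", 1)`. The caller never threads the state by hand; `flatMap` does it, which is exactly what we want for `GameState`.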
Step 3: Handling User Input with IO and Either
This class handles reading user input from the console. The readMoveCommand method returns an IO<Either<GameError, MoveCommand>>. This type signature is very descriptive: it tells us the action is an IO side effect, and its result will be either a GameError or a valid MoveCommand.
class InputHandler {
private static final Scanner scanner = new Scanner(System.in);
static Kind<IOKind.Witness, Either<GameError, MoveCommand>> readMoveCommand() {
return IOKindHelper.IO_OP.delay(() -> {
System.out.print("Enter move (e.g., 'a3 b4') or 'quit': ");
String line = scanner.nextLine();
if ("quit".equalsIgnoreCase(line.trim())) {
return Either.left(new GameError("Player quit the game."));
}
String[] parts = line.trim().split("\\s+");
if (parts.length != 2) {
return Either.left(new GameError("Invalid input. Use 'from to' format (e.g., 'c3 d4')."));
}
try {
Square from = parseSquare(parts[0]);
Square to = parseSquare(parts[1]);
return Either.right(new MoveCommand(from, to));
} catch (IllegalArgumentException e) {
return Either.left(new GameError(e.getMessage()));
}
});
}
private static Square parseSquare(String s) throws IllegalArgumentException {
if (s == null || s.length() != 2) throw new IllegalArgumentException("Invalid square format: " + s);
char colChar = s.charAt(0);
char rowChar = s.charAt(1);
if (colChar < 'a' || colChar > 'h' || rowChar < '1' || rowChar > '8') {
throw new IllegalArgumentException("Square out of bounds (a1-h8): " + s);
}
int col = colChar - 'a';
int row = rowChar - '1';
return new Square(row, col);
}
}
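The `Either` returned here will later be collapsed with `fold` (Step 5), which forces the caller to handle both the error and success branches. As a minimal, self-contained sketch of that idea (a hypothetical `SimpleEither`, not the library's richer type):

```java
import java.util.function.Function;

// Hypothetical minimal Either, showing the idea behind the fold used in Step 5.
// higher-kinded-j ships its own richer Either; this sketch only covers the core.
sealed interface SimpleEither<L, R> {
  record Left<L, R>(L value) implements SimpleEither<L, R> {}
  record Right<L, R>(R value) implements SimpleEither<L, R> {}

  // Collapse both cases into one result: the caller must handle both branches.
  default <T> T fold(Function<L, T> onLeft, Function<R, T> onRight) {
    return switch (this) {
      case Left<L, R> l -> onLeft.apply(l.value());
      case Right<L, R> r -> onRight.apply(r.value());
    };
  }
}
```

Because the switch is exhaustive over the sealed hierarchy, forgetting the error case is a compile error rather than a runtime surprise.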
Learn more about the IO Monad and Either Monad.
Step 4: Game Logic as State Transitions
This is the heart of our application, containing the complete rules of draughts. The applyMove method takes a MoveCommand and returns a State computation. This computation, when run, will validate the move against the current GameState, and if valid, produce a MoveResult and the new GameState. This entire class has no side effects.
public class GameLogicSimple {
static Kind<StateKind.Witness<GameState>, MoveResult> applyMove(MoveCommand command) {
return StateKindHelper.STATE.widen(
State.of(
currentState -> {
// Unpack command for easier access
Square from = command.from();
Square to = command.to();
Piece piece = currentState.board().get(from);
String invalidMsg; // To hold error messages
// Validate the move based on currentState and command
// - Is it the current player's piece?
// - Is the move diagonal?
// - Is the destination square empty or an opponent's piece for a jump?
if (piece == null) {
invalidMsg = "No piece at " + from;
return new StateTuple<>(
new MoveResult(MoveOutcome.INVALID_MOVE, invalidMsg),
currentState.withMessage(invalidMsg));
}
if (piece.owner() != currentState.currentPlayer()) {
invalidMsg = "Not your piece.";
return new StateTuple<>(
new MoveResult(MoveOutcome.INVALID_MOVE, invalidMsg),
currentState.withMessage(invalidMsg));
}
if (currentState.board().containsKey(to)) {
invalidMsg = "Destination square " + to + " is occupied.";
return new StateTuple<>(
new MoveResult(MoveOutcome.INVALID_MOVE, invalidMsg),
currentState.withMessage(invalidMsg));
}
int rowDiff = to.row() - from.row();
int colDiff = to.col() - from.col();
// Simple move or jump?
if (Math.abs(rowDiff) == 1 && Math.abs(colDiff) == 1) { // Simple move
if (piece.type() == PieceType.MAN) {
if ((piece.owner() == Player.RED && rowDiff > 0)
|| (piece.owner() == Player.BLACK && rowDiff < 0)) {
invalidMsg = "Men can only move forward.";
return new StateTuple<>(
new MoveResult(MoveOutcome.INVALID_MOVE, invalidMsg),
currentState.withMessage(invalidMsg));
}
}
return performMove(currentState, command, piece);
} else if (Math.abs(rowDiff) == 2 && Math.abs(colDiff) == 2) { // Jump move
Square jumpedSquare =
new Square(from.row() + rowDiff / 2, from.col() + colDiff / 2);
Piece jumpedPiece = currentState.board().get(jumpedSquare);
if (jumpedPiece == null || jumpedPiece.owner() == currentState.currentPlayer()) {
invalidMsg = "Invalid jump. Must jump over an opponent's piece.";
return new StateTuple<>(
new MoveResult(MoveOutcome.INVALID_MOVE, invalidMsg),
currentState.withMessage(invalidMsg));
}
return performJump(currentState, command, piece, jumpedSquare);
} else {
invalidMsg = "Move must be diagonal by 1 or 2 squares.";
return new StateTuple<>(
new MoveResult(MoveOutcome.INVALID_MOVE, invalidMsg),
currentState.withMessage(invalidMsg));
}
}));
}
private static StateTuple<GameState, MoveResult> performMove(
GameState state, MoveCommand command, Piece piece) {
Map<Square, Piece> newBoard = new HashMap<>(state.board());
newBoard.remove(command.from());
newBoard.put(command.to(), piece);
GameState movedState = state.withBoard(newBoard);
GameState finalState = checkAndKingPiece(movedState, command.to());
return new StateTuple<>(
new MoveResult(MoveOutcome.SUCCESS, "Move successful."), finalState.togglePlayer());
}
private static StateTuple<GameState, MoveResult> performJump(
GameState state, MoveCommand command, Piece piece, Square jumpedSquare) {
Map<Square, Piece> newBoard = new HashMap<>(state.board());
newBoard.remove(command.from());
newBoard.remove(jumpedSquare);
newBoard.put(command.to(), piece);
GameState jumpedState = state.withBoard(newBoard);
GameState finalState = checkAndKingPiece(jumpedState, command.to());
// Check for win condition
boolean blackWins =
finalState.board().values().stream().noneMatch(p -> p.owner() == Player.RED);
boolean redWins =
finalState.board().values().stream().noneMatch(p -> p.owner() == Player.BLACK);
if (blackWins || redWins) {
String winner = blackWins ? "BLACK" : "RED";
return new StateTuple<>(
new MoveResult(MoveOutcome.GAME_WON, winner + " wins!"),
finalState.withGameOver().withMessage(winner + " has captured all pieces!"));
}
return new StateTuple<>(
new MoveResult(MoveOutcome.CAPTURE_MADE, "Capture successful."), finalState.togglePlayer());
}
private static GameState checkAndKingPiece(GameState state, Square to) {
Piece piece = state.board().get(to);
if (piece != null && piece.type() == PieceType.MAN) {
// A RED piece is kinged on row index 0 (the "1st" row).
// A BLACK piece is kinged on row index 7 (the "8th" row).
if ((piece.owner() == Player.RED && to.row() == 0)
|| (piece.owner() == Player.BLACK && to.row() == 7)) {
Map<Square, Piece> newBoard = new HashMap<>(state.board());
newBoard.put(to, new Piece(piece.owner(), PieceType.KING));
return state
.withBoard(newBoard)
.withMessage(piece.owner() + "'s piece at " + to + " has been kinged!");
}
}
return state;
}
}
This uses State.of to create a stateful computation. State.get(), State.set(), and State.modify() are other invaluable tools from the State monad.
Step 5: Composing with flatMap - The Monadic Power
Now, we combine these pieces. The main loop needs to:
- Display the board (`IO`).
- Read user input (`IO`).
- If the input is valid, apply it to the game logic (`State`).
- Loop with the new game state.
This sequence of operations is a good use case for a For comprehension to improve on nested flatMap calls.
Here's how we compose these pieces together in the main game loop:
public class Draughts {
private static final IOMonad ioMonad = IOMonad.INSTANCE;
// Processes a single turn of the game
private static Kind<IOKind.Witness, GameState> processTurn(GameState currentGameState) {
// 1. Use 'For' to clearly sequence the display and read actions.
var sequence = For.from(ioMonad, BoardDisplay.displayBoard(currentGameState))
.from(ignored -> InputHandler.readMoveCommand())
.yield((ignored, eitherResult) -> eitherResult); // Yield the result of the read action
// 2. The result of the 'For' is an IO<Either<...>>.
// Now, flatMap that single result to handle the branching.
return ioMonad.flatMap(
eitherResult ->
eitherResult.fold(
error -> { // Left case: Input error
return IOKindHelper.IO_OP.delay(
() -> {
System.out.println("Error: " + error.description());
return currentGameState;
});
},
moveCommand -> { // Right case: Valid input
var stateComputation = GameLogic.applyMove(moveCommand);
var resultTuple = StateKindHelper.STATE.runState(stateComputation, currentGameState);
return ioMonad.of(resultTuple.state());
}),
sequence);
}
// other methods....
}
The For comprehension flattens the display -> read sequence, making the primary workflow more declarative and easier to read than nested callbacks.
The Order Processing Example in the higher-kinded-j docs shows a more complex scenario using CompletableFuture and EitherT, which is a great reference for getting started with monad transformers.
Step 6: The Game Loop
public class Draughts {
private static final IOMonad ioMonad = IOMonad.INSTANCE;
// The main game loop as a single, recursive IO computation
private static Kind<IOKind.Witness, Unit> gameLoop(GameState gameState) {
if (gameState.isGameOver()) {
// Base case: game is over, just display the final board and message.
return BoardDisplay.displayBoard(gameState);
}
// Recursive step: process one turn and then loop with the new state
return ioMonad.flatMap(Draughts::gameLoop, processTurn(gameState));
}
// processTurn as before....
public static void main(String[] args) {
// Get the initial state
GameState initialState = GameState.initial();
// Create the full game IO program
Kind<IOKind.Witness, Unit> fullGame = gameLoop(initialState);
// Execute the program. This is the only place where side effects are actually run.
IOKindHelper.IO_OP.unsafeRunSync(fullGame);
System.out.println("Thank you for playing!");
}
}
Key methods like IOKindHelper.IO_OP.unsafeRunSync() and StateKindHelper.STATE.runState() are used to execute the monadic computations at the "edge" of the application.
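To see why `unsafeRunSync` belongs only at the edge, it helps to model `IO` minimally: a description of a side effect is just a suspended `Supplier` that runs on demand. A sketch, using a hypothetical `SimpleIO` rather than the library's actual type:

```java
import java.util.function.Function;
import java.util.function.Supplier;

// Hypothetical minimal IO: a suspended computation that only runs on demand.
// The real higher-kinded-j IO is richer, but the core idea is the same.
record SimpleIO<A>(Supplier<A> thunk) {

  static <A> SimpleIO<A> delay(Supplier<A> thunk) {
    return new SimpleIO<>(thunk);
  }

  // Sequencing: describe "run this effect, then feed its result to the next".
  <B> SimpleIO<B> flatMap(Function<A, SimpleIO<B>> f) {
    return new SimpleIO<>(() -> f.apply(thunk.get()).thunk().get());
  }

  // The only place effects actually execute: the "edge" of the application.
  A unsafeRunSync() {
    return thunk.get();
  }
}
```

Building a program with `delay` and `flatMap` performs no side effects at all; everything is deferred until the single `unsafeRunSync` call, just like `fullGame` in `main` above.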
Step 7: Displaying the Board
A simple text representation will do the trick.
This class is responsible for rendering the GameState to the console. Notice how the displayBoard method doesn't perform the printing directly; it returns an IO<Unit> which is a description of the printing action. This keeps the method pure.
public class BoardDisplay {
public static Kind<IOKind.Witness, Unit> displayBoard(GameState gameState) {
return IOKindHelper.IO_OP.delay(
() -> {
System.out.println("\n a b c d e f g h");
System.out.println(" +-----------------+");
for (int r = 7; r >= 0; r--) { // Print from row 8 down to 1
System.out.print((r + 1) + "| ");
for (int c = 0; c < 8; c++) {
Piece p = gameState.board().get(new Square(r, c));
if (p == null) {
System.out.print(". ");
} else {
char pieceChar = (p.owner() == Player.RED) ? 'r' : 'b';
if (p.type() == PieceType.KING) pieceChar = Character.toUpperCase(pieceChar);
System.out.print(pieceChar + " ");
}
}
System.out.println("|" + (r + 1));
}
System.out.println(" +-----------------+");
System.out.println(" a b c d e f g h");
System.out.println("\n" + gameState.message());
if (!gameState.isGameOver()) {
System.out.println("Current Player: " + gameState.currentPlayer());
}
return Unit.INSTANCE;
});
}
}
Playing the game

In the game we can see that BLACK has "kinged" a piece by reaching e8.
Step 8: Refactoring for Multiple Captures
A key rule in draughts is that if a capture is available, it must be taken, and if a capture leads to another possible capture for the same piece, that jump must also be taken.
The beauty of our functional approach is that we only need to modify the core rules in GameLogic.java. The Draughts.java game loop, the IO handlers, and the data models don't need to change at all.
The core idea is to modify the performJump method. After a jump is completed, we will check if the piece that just moved can make another jump from its new position.
We do this by adding a helper, canPieceJump, and modifying performJump to check for subsequent jumps.
If another jump is possible, the player's turn does not end: we update the board state but do not switch the current player, forcing them to make another capture. If no further jump is possible, we switch the player as normal.
/** Check if a piece at a given square has any valid jumps. */
private static boolean canPieceJump(GameState state, Square from) {
Piece piece = state.board().get(from);
if (piece == null) return false;
int[] directions = {-2, 2};
for (int rowOffset : directions) {
for (int colOffset : directions) {
if (piece.type() == PieceType.MAN) {
if ((piece.owner() == Player.RED && rowOffset > 0)
|| (piece.owner() == Player.BLACK && rowOffset < 0)) {
continue; // Invalid forward direction for man
}
}
Square to = new Square(from.row() + rowOffset, from.col() + colOffset);
if (to.row() < 0
|| to.row() > 7
|| to.col() < 0
|| to.col() > 7
|| state.board().containsKey(to)) {
continue; // Off board or destination occupied
}
Square jumpedSquare = new Square(from.row() + rowOffset / 2, from.col() + colOffset / 2);
Piece jumpedPiece = state.board().get(jumpedSquare);
if (jumpedPiece != null && jumpedPiece.owner() != piece.owner()) {
return true; // Found a valid jump
}
}
}
return false;
}
/** Now it checks for further jumps after a capture. */
private static StateTuple<GameState, MoveResult> performJump(
GameState state, MoveCommand command, Piece piece, Square jumpedSquare) {
// Perform the jump and update board
Map<Square, Piece> newBoard = new HashMap<>(state.board());
newBoard.remove(command.from());
newBoard.remove(jumpedSquare);
newBoard.put(command.to(), piece);
GameState jumpedState = state.withBoard(newBoard);
// Check for kinging after the jump
GameState stateAfterKinging = checkAndKingPiece(jumpedState, command.to());
// Check for win condition after the capture
boolean blackWins =
stateAfterKinging.board().values().stream().noneMatch(p -> p.owner() == Player.RED);
boolean redWins =
stateAfterKinging.board().values().stream().noneMatch(p -> p.owner() == Player.BLACK);
if (blackWins || redWins) {
String winner = blackWins ? "BLACK" : "RED";
return new StateTuple<>(
new MoveResult(MoveOutcome.GAME_WON, winner + " wins!"),
stateAfterKinging.withGameOver().withMessage(winner + " has captured all pieces!"));
}
// Check if the same piece can make another jump
boolean anotherJumpPossible = canPieceJump(stateAfterKinging, command.to());
if (anotherJumpPossible) {
// If another jump exists, DO NOT toggle the player.
// Update the message to prompt for the next jump.
String msg = "Capture successful. You must jump again with the same piece.";
return new StateTuple<>(
new MoveResult(MoveOutcome.CAPTURE_MADE, msg), stateAfterKinging.withMessage(msg));
} else {
// No more jumps, so end the turn and toggle the player.
return new StateTuple<>(
new MoveResult(MoveOutcome.CAPTURE_MADE, "Capture successful."),
stateAfterKinging.togglePlayer());
}
}
Why This Functional Approach is Better
Having seen the complete code, let's reflect on the benefits:
- Testability: The `GameLogic` class is completely pure: it has no side effects and doesn't depend on `System.in` or `System.out`. You can test the entire rules engine simply by providing a `GameState` and a `MoveCommand`, then asserting on the resulting `GameState` and `MoveResult`. This is significantly easier than testing code that's tangled with console I/O.
- Composability: The `gameLoop` in `Draughts.java` is a clear example of composition. It declaratively lays out the sequence of events for a game turn: `display -> read -> process`. The `flatMap` calls hide all the messy details of passing state and results from one step to the next.
- Reasoning: The type signatures tell a story. `IO<Either<GameError, MoveCommand>>` is far more descriptive than a method that returns a `MoveCommand` but might throw an exception or return `null`. It explicitly forces the caller to handle both the success and error cases.
- Maintainability: If you want to change from a command-line interface to a graphical one, you only need to replace `BoardDisplay` and `InputHandler`. The entire core `GameLogic` remains untouched because it's completely decoupled from the presentation layer.
This tutorial has only scratched the surface. You could extend this by exploring other constructs from the library, like using Validated to accumulate multiple validation errors or using the Reader monad to inject different sets of game rules.
Java may not have native HKTs, but with Higher-Kinded-J, you can absolutely utilise these powerful and elegant functional patterns to write better, more robust applications.
An Introduction to Optics

As Java developers, we appreciate the safety and predictability of immutable objects, especially with the introduction of records. However, this safety comes at a cost: updating nested immutable data can be a verbose and error-prone nightmare.
Consider a simple nested record structure:
record Street(String name, int number) {}
record Address(Street street, String city) {}
record User(String name, Address address) {}
How do you update the user's street name? In standard Java, you're forced into a "copy-and-update" cascade:
// What most Java developers actually write
public User updateStreetName(User user, String newStreetName) {
var address = user.address();
var street = address.street();
var newStreet = new Street(newStreetName, street.number());
var newAddress = new Address(newStreet, address.city());
return new User(user.name(), newAddress);
}
This is tedious, hard to read, and gets exponentially worse with deeper nesting. What if there was a way to "zoom in" on the data you want to change, update it, and get a new copy of the top-level object back, all in one clean operation?
This is the problem that Optics solve.
What Are Optics?
At their core, optics are simply composable, functional getters and setters for immutable data structures.
Think of an optic as a zoom lens for your data. It's a first-class object that represents a path from a whole structure (like User) to a specific part (like the street name). Because it's an object, you can pass it around, compose it with other optics, and use it to perform functional updates.
Think of Optics Like...
- Lens: A magnifying glass that focuses on one specific part 🔎
- Prism: A tool that splits light, but only works with certain types of light 🔬
- Iso: A universal translator between equivalent languages 🔄
- Traversal: A spotlight that can illuminate many targets at once 🗺️
- Fold: A read-only query tool that extracts and aggregates data 📊
Every optic provides two basic capabilities:
- `get`: Focus on a structure `S` and retrieve a part `A`.
- `set`: Focus on a structure `S`, provide a new part `A`, and receive a new `S` with the part updated. This is always an immutable operation: a new copy of `S` is returned.
The real power comes from their composability. You can chain optics together to peer deeply into nested structures and perform targeted updates with ease.
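To make the `get`/`set`/compose idea concrete before introducing the library's generated optics, here is a minimal, self-contained lens in plain Java. The `SimpleLens` type (and the `Street`/`Address`/`User` records repeated from the introduction) are illustrative only, not the library's API:

```java
import java.util.function.BiFunction;
import java.util.function.Function;

// Illustrative data model from the introduction.
record Street(String name, int number) {}
record Address(Street street, String city) {}
record User(String name, Address address) {}

// Hypothetical minimal Lens: a composable getter/setter pair.
record SimpleLens<S, A>(Function<S, A> getter, BiFunction<S, A, S> setter) {

  A get(S source) { return getter.apply(source); }

  // Always returns a new copy of S; the original is untouched.
  S set(A newValue, S source) { return setter.apply(source, newValue); }

  S modify(Function<A, A> f, S source) { return set(f.apply(get(source)), source); }

  // Composition: zoom from S to A, then from A to B.
  <B> SimpleLens<S, B> andThen(SimpleLens<A, B> next) {
    return new SimpleLens<>(
        s -> next.get(get(s)),
        (s, b) -> set(next.set(b, get(s)), s));
  }
}
```

Composing `User -> Address -> Street -> String` lenses with `andThen` turns the copy-and-update cascade from the introduction into a single reusable value: one `set("New Street", user)` call rebuilds the whole `User`.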
The Optics Family in Higher-Kinded-J
The higher-kinded-j library provides the foundation for a rich optics library, primarily focused on three main types. Each is designed to solve a specific kind of data access problem.
1. Lens: For "Has-A" Relationships 🔎
A Lens is the most common optic. It focuses on a single, required piece of data within a larger "product type" (a record or class with fields). It's for data that is guaranteed to exist.
- Problem it solves: Getting and setting a field within an object, especially a deeply nested one.
- Generated Code: Annotating a record with `@GenerateLenses` produces a companion class (e.g., `UserLenses`) that contains:
  - A lens for each field (e.g., `UserLenses.address()`).
  - Convenient `with*` helper methods for easy updates (e.g., `UserLenses.withAddress(...)`).
- Example (Deep Update with Lenses): To solve our initial problem of updating the user's street name, we compose lenses:
// Compose lenses to create a direct path to the nested data
var userToStreetName = UserLenses.address()
.andThen(AddressLenses.street())
.andThen(StreetLenses.name());
// Perform the deep update in a single, readable line
User updatedUser = userToStreetName.set("New Street", userLogin);
- Example (Shallow Update with `with*` Helpers): For simple, top-level updates, the `with*` methods are more direct and discoverable.
// Before: Using the lens directly
User userWithNewName = UserLenses.name().set("Bob", userLogin);
// After: Using the generated helper method
User userWithNewName = UserLenses.withName(userLogin, "Bob");
2. Iso: For "Is-Equivalent-To" Relationships 🔄
An Iso (Isomorphism) is a special, reversible optic. It represents a lossless, two-way conversion between two types that hold the exact same information. Think of it as a type-safe, composable adapter.
- Problem it solves: Swapping between different representations of the same data, such as a wrapper class and its raw value, or between two structurally different but informationally equivalent records.
- Example: Suppose you have a `Point` record and a `Tuple2<Integer, Integer>`, which are structurally different but hold the same data.

public record Point(int x, int y) {}

You can define an `Iso` to convert between them:

@GenerateIsos
public static Iso<Point, Tuple2<Integer, Integer>> pointToTuple() {
  return Iso.of(
      point -> Tuple.of(point.x(), point.y()), // get
      tuple -> new Point(tuple._1(), tuple._2()) // reverseGet
  );
}

This `Iso` can now be composed with other optics to, for example, create a `Lens` that goes from a `Point` directly to its first element inside a `Tuple` representation.
3. Prism: For "Is-A" Relationships 🔬
A Prism is like a Lens, but for "sum types" (sealed interfaces or enums). It focuses on a single possible case of a type. A Prism's `get` operation can fail (it returns an `Optional`), because the data might not be the case you're looking for. Think of it as a type-safe, functional `instanceof` and cast.
- Problem it solves: Safely operating on one variant of a sealed interface.
- Example: Instead of using an `if`-`instanceof` chain to handle a specific `DomainError`:
// Using a generated Prism for a sealed interface
DomainErrorPrisms.shippingError()
.getOptional(error) // Safely gets a ShippingError if it matches
.filter(ShippingError::isRecoverable)
.ifPresent(this::handleRecovery); // Perform action only if it's the right type
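Mechanically, a prism is just a pair of functions: a partial match returning `Optional`, and a way to rebuild the whole from the part. A minimal sketch in plain Java, using a hypothetical `Shape` hierarchy in place of `DomainError` (all names here are illustrative):

```java
import java.util.Optional;
import java.util.function.Function;

// Illustrative sum type standing in for a sealed hierarchy like DomainError.
sealed interface Shape permits Circle, Rect {}
record Circle(double radius) implements Shape {}
record Rect(double w, double h) implements Shape {}

// Hypothetical minimal Prism: a partial getter plus a constructor back to the whole.
record SimplePrism<S, A>(Function<S, Optional<A>> match, Function<A, S> build) {

  Optional<A> getOptional(S source) { return match.apply(source); }

  // Modify only when the case matches; otherwise return the source unchanged.
  S modify(Function<A, A> f, S source) {
    return match.apply(source).map(a -> build.apply(f.apply(a))).orElse(source);
  }
}
```

Applying `modify` to the "wrong" case is simply a no-op, which is what makes prisms safe to use without an explicit `instanceof` check at every call site.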
4. Traversal: For "Has-Many" Relationships 🗺️
A Traversal is an optic that can focus on multiple targets at once—typically all the items within a collection inside a larger structure.
- Problem it solves: Applying an operation to every element in a `List`, `Set`, or other collection that is a field within an object.
- Example: To validate a list of promo codes in an order with `Validated`:

@GenerateTraversals
public record OrderData(..., List<String> promoCodes) {}

var codesTraversal = OrderDataTraversals.promoCodes();

// returns Validated<Error, Code>
var validationFunction = (String code) -> validate(code);

// Use the traversal to apply the function to every code.
// The Applicative for Validated handles the error accumulation automatically.
Validated<Error, OrderData> result = codesTraversal.modifyF(
    validationFunction, orderData, validatedApplicative);
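The essence of a traversal, minus the effect handling that `modifyF` adds, can be sketched in plain Java: extract the focused elements, transform them, and rebuild the structure. The `SimpleTraversal` and `OrderData` names below are illustrative stand-ins, not the generated code:

```java
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Function;

// Illustrative record standing in for the annotated OrderData above.
record OrderData(String id, List<String> promoCodes) {}

// Hypothetical minimal Traversal over a list-valued field. The library's real
// traversals additionally abstract over an Applicative effect via modifyF.
record SimpleTraversal<S, A>(Function<S, List<A>> getAll, BiFunction<S, List<A>, S> setAll) {

  // Apply f to every focused element and rebuild an updated copy of S.
  S modifyAll(Function<A, A> f, S source) {
    List<A> updated = getAll.apply(source).stream().map(f).toList();
    return setAll.apply(source, updated);
  }
}
```

One `modifyAll(String::toUpperCase, order)` call then rewrites every promo code and returns a fresh `OrderData`, leaving the original untouched.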
5. Fold: For "Has-Many" Queries 📊
A Fold is a read-only optic designed specifically for querying and extracting data without modification. Think of it as a Traversal that has given up the ability to modify in exchange for a clearer expression of intent and additional query-focused operations.
-
Problem it solves: Extracting information from complex data structures—finding items, checking conditions, aggregating values, or collecting data without modifying the original structure.
-
Generated Code: Annotating a record with
`@GenerateFolds` produces a companion class (e.g., `OrderFolds`) with a `Fold` for each field.
Example (Querying Product Catalogue):
- To find all products in an order that cost more than £50:
// Get the generated fold
Fold<Order, Product> orderToProducts = OrderFolds.items();
// Find all matching products
List<Product> expensiveItems = orderToProducts.getAll(order).stream()
.filter(product -> product.price() > 50.00)
.collect(toList());
// Or check if any exist
boolean hasExpensiveItems = orderToProducts.exists(
product -> product.price() > 50.00,
order
);
- Key Operations:
  - `getAll(source)`: Extract all focused values into a `List`
  - `preview(source)`: Get the first value as an `Optional`
  - `find(predicate, source)`: Find the first matching value
  - `exists(predicate, source)`: Check if any value matches
  - `all(predicate, source)`: Check if all values match
  - `isEmpty(source)`: Check if there are zero focused values
  - `length(source)`: Count the number of focused values
Why Fold is Important: While Traversal can do everything Fold can do, using Fold makes your code's intent crystal clear—"I'm only reading this data, not modifying it." This is valuable for code reviewers, for preventing accidental mutations, and for expressing domain logic where queries should be separated from commands (CQRS pattern).
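To make the read-only nature concrete, a fold can be modelled as nothing more than a `getAll` function plus stream-based queries derived from it. A minimal sketch using hypothetical `SimpleFold`, `Order`, and `Product` types (illustrative only, not the generated code):

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;
import java.util.function.Predicate;

// Illustrative data model echoing the Order/Product example above.
record Product(String name, double price) {}
record Order(List<Product> items) {}

// Hypothetical minimal read-only Fold: extraction only, no setter at all.
record SimpleFold<S, A>(Function<S, List<A>> getAll) {

  Optional<A> preview(S source) { return getAll.apply(source).stream().findFirst(); }

  boolean exists(Predicate<A> p, S source) { return getAll.apply(source).stream().anyMatch(p); }

  boolean all(Predicate<A> p, S source) { return getAll.apply(source).stream().allMatch(p); }

  int length(S source) { return getAll.apply(source).size(); }
}
```

Because the type has no `set` at all, "this code only reads" is enforced by the compiler rather than by convention.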
Advanced Capabilities: Profunctor Adaptations
One of the most powerful features of higher-kinded-j optics is their profunctor nature. Every optic can be adapted to work with different source and target types using three key operations:
- `contramap`: Adapt an optic to work with a different source type
- `map`: Transform the result type of an optic
- `dimap`: Adapt both source and target types simultaneously
This makes optics incredibly flexible for real-world scenarios like API integration, legacy system support, and working with different data representations. For a detailed exploration of these capabilities, see the Profunctor Optics Guide.
How higher-kinded-j Provides Optics
This brings us to the unique advantages higher-kinded-j offers for optics in Java.
- An Annotation-Driven Workflow: Manually writing optics is boilerplate. The `higher-kinded-j` approach automates this. By simply adding an annotation (`@GenerateLenses`, `@GeneratePrisms`, etc.) to your data classes, you get fully-functional, type-safe optics for free. This is a massive productivity boost and eliminates a major barrier to using optics in Java.
- Higher-Kinded Types for Effectful Updates: This is the most powerful feature. Because `higher-kinded-j` provides an HKT abstraction (`Kind<F, A>`) and type classes like `Functor` and `Applicative`, the optics can perform effectful modifications. The `modifyF` method is generic over an `Applicative` effect `F`. This means you can perform an update within the context of any data type that has an `Applicative` instance:
  - Want to perform an update that might fail? Use `Optional` or `Either` as your `F`.
  - Want to perform an asynchronous update? Use `CompletableFuture` as your `F`.
  - Want to accumulate validation errors? Use `Validated` as your `F`.
- Profunctor Adaptability: Every optic is fundamentally a profunctor, meaning it can be adapted to work with different data types and structures. This provides incredible flexibility for integrating with external systems, handling legacy data formats, and working with strongly-typed wrappers.
Common Patterns
When to Use with* Helpers vs Manual Lenses
- Use `with*` helpers for simple, top-level field updates
- Use composed lenses for deep updates or when you need to reuse the path
- Use manual lens creation for computed properties or complex transformations
Decision Guide
- Need to focus on a required field? → Lens
- Need to work with optional variants? → Prism
- Need to convert between equivalent types? → Iso
- Need to modify collections? → Traversal
- Need to query or extract data without modification? → Fold
- Need to adapt existing optics? → Profunctor operations
Common Pitfalls
❌ Don't do this:
// Calling get() multiple times is inefficient
var street = employeeToStreet.get(employee);
var newEmployee = employeeToStreet.set(street.toUpperCase(), employee);
✅ Do this instead:
// Use modify() for transformations
var newEmployee = employeeToStreet.modify(String::toUpperCase, employee);
This level of abstraction enables you to write highly reusable and testable business logic that is completely decoupled from the details of state management, asynchrony, or error handling.
Making Optics Feel Natural in Java
While optics are powerful, their functional programming origins can make them feel foreign to Java developers. To bridge this gap, higher-kinded-j provides two complementary approaches for working with optics:
Fluent API for Optics
The Fluent API provides Java-friendly syntax for optic operations, offering both concise static methods and discoverable fluent builders:
// Static method style - concise
int age = OpticOps.get(person, PersonLenses.age());
// Fluent builder style - explicit and discoverable
int age = OpticOps.getting(person).through(PersonLenses.age());
This makes optics feel natural in Java whilst preserving all their functional power. Learn more in the Fluent API Guide.
Free Monad DSL for Optics
The Free Monad DSL separates program description from execution, enabling you to:
- Build optic programs as composable values
- Execute programs with different strategies (direct, logging, validation)
- Create audit trails for compliance
- Validate operations before applying them
// Build a program
Free<OpticOpKind.Witness, Person> program =
OpticPrograms.get(person, PersonLenses.age())
.flatMap(age ->
OpticPrograms.set(person, PersonLenses.age(), age + 1));
// Execute with different interpreters
Person result = OpticInterpreters.direct().run(program); // Production
LoggingOpticInterpreter logger = OpticInterpreters.logging();
logger.run(program); // Audit trail
ValidationOpticInterpreter.ValidationResult validation = OpticInterpreters.validating().validate(program); // Dry-run
This powerful pattern is explored in detail in the Free Monad DSL Guide and Optic Interpreters Guide.
Next: Lenses: Working with Product Types
Nested Updates with Lenses: A Practical Guide
Working with Product Types

- How to safely access and update fields in immutable data structures
- Using `@GenerateLenses` to automatically create type-safe field accessors
- Composing lenses to navigate deeply nested records
- The difference between `get`, `set`, and `modify` operations
- Building reusable, composable data access patterns
- When to use lenses vs direct field access
In the introduction to optics, we saw how updating nested immutable data can be verbose and why optics provide a clean, functional solution. We identified the Lens as the primary tool for working with "has-a" relationships, like a field within a record.
This guide provides a complete, step-by-step walkthrough of how to solve the nested update problem using a composable Lens and its generated helper methods.
The Scenario: Updating an Employee's Address
Let's use a common business scenario involving a deeply nested data structure. Our goal is to update the street of an Employee's Company Address.
The Data Model:
public record Address(String street, String city) {}
public record Company(String name, Address address) {}
public record Employee(String name, Company company) {}
Without optics, changing the street requires manually rebuilding the entire Employee object graph. With optics, we can define a direct path to the street and perform the update in a single, declarative line.
A Step-by-Step Walkthrough
Step 1: Generating the Lenses
Manually writing Lens implementations is tedious boilerplate. The hkj-optics library automates this with an annotation processor. To begin, we simply annotate our records with @GenerateLenses.
This process creates a companion class for each record (e.g., EmployeeLenses, CompanyLenses) that contains two key features:
- Lens Factories: Static methods that create a `Lens` for each field (e.g., `EmployeeLenses.company()`).
- `with*` Helpers: Static convenience methods for easy, shallow updates (e.g., `EmployeeLenses.withCompany(...)`).
import org.higherkindedj.optics.annotations.GenerateLenses;
@GenerateLenses
public record Address(String street, String city) {}
@GenerateLenses
public record Company(String name, Address address) {}
@GenerateLenses
public record Employee(String name, Company company) {}
Step 2: Composing a Deep Lens
With the lenses generated, we can now compose them using the andThen method. We'll chain the individual lenses together to create a single, new Lens that represents the complete path from the top-level object (Employee) to the deeply nested field (street).
The result is a new, powerful, and reusable Lens<Employee, String>.
// Get the generated lenses
Lens<Employee, Company> employeeToCompany = EmployeeLenses.company();
Lens<Company, Address> companyToAddress = CompanyLenses.address();
Lens<Address, String> addressToStreet = AddressLenses.street();
// Compose them to create a single, deep lens
Lens<Employee, String> employeeToStreet =
employeeToCompany
.andThen(companyToAddress)
.andThen(addressToStreet);
Step 3: Performing Updates with the Composed Lens
With our optics generated, we have two primary ways to perform updates.
A) Simple, Shallow Updates with with* Helpers
For simple updates to a top-level field, the generated with* methods are the most convenient and readable option.
// Create an employee instance
var employee = new Employee("Alice", ...);
// Use the generated helper to create an updated copy
var updatedEmployee = EmployeeLenses.withName(employee, "Bob");
This is a cleaner, more discoverable alternative to using the lens directly (EmployeeLenses.name().set("Bob", employee)).
B) Deep Updates with a Composed Lens
For deep updates into nested structures, the composed lens is the perfect tool. The Lens interface provides two primary methods for this:
- `set(newValue, object)`: Replaces the focused value with a new one.
- `modify(function, object)`: Applies a function to the focused value to compute the new value.
Both methods handle the "copy-and-update" cascade for you, returning a completely new top-level object.
// Use the composed lens from Step 2
Employee updatedEmployee = employeeToStreet.set("456 Main St", initialEmployee);
When to Use with* Helpers vs Manual Lenses
Understanding when to use each approach will help you write cleaner, more maintainable code:
Use with* Helpers When:
- Simple, top-level field updates - Direct field replacement on the immediate object
- One-off updates - You don't need to reuse the update logic
- API clarity - You want the most discoverable, IDE-friendly approach
// Perfect for simple updates
var promotedEmployee = EmployeeLenses.withName(employee, "Senior " + employee.name());
Use Composed Lenses When:
- Deep updates - Navigating multiple levels of nesting
- Reusable paths - The same update pattern will be used multiple times
- Complex transformations - Using `modify()` with functions
- Conditional updates - Part of larger optic compositions
// Ideal for reusable deep updates
Lens<Employee, String> streetLens = employeeToCompany
.andThen(companyToAddress)
.andThen(addressToStreet);
// Can be reused across your application
Employee moved = streetLens.set("New Office Street", employee);
Employee uppercased = streetLens.modify(String::toUpperCase, employee);
Use Manual Lens Creation When:
- Computed properties - The lens represents derived data
- Complex transformations - Custom getter/setter logic
- Legacy integration - Working with existing APIs
// For computed or derived properties
Lens<Employee, String> fullAddressLens = Lens.of(
emp -> emp.company().address().street() + ", " + emp.company().address().city(),
(emp, fullAddr) -> {
String[] parts = fullAddr.split(", ");
return employeeToCompany.andThen(companyToAddress).set(
new Address(parts[0], parts[1]), emp);
}
);
Common Pitfalls
❌ Don't Do This:
// Inefficient: Calling get() multiple times
var currentStreet = employeeToStreet.get(employee);
var newEmployee = employeeToStreet.set(currentStreet.toUpperCase(), employee);
// Verbose: Rebuilding lenses repeatedly
var street1 = EmployeeLenses.company().andThen(CompanyLenses.address()).andThen(AddressLenses.street()).get(emp1);
var street2 = EmployeeLenses.company().andThen(CompanyLenses.address()).andThen(AddressLenses.street()).get(emp2);
// Mixing approaches unnecessarily
var tempCompany = EmployeeLenses.company().get(employee);
var updatedCompany = CompanyLenses.withName(tempCompany, "New Company");
var finalEmployee = EmployeeLenses.withCompany(employee, updatedCompany);
✅ Do This Instead:
// Efficient: Use modify() for transformations
var newEmployee = employeeToStreet.modify(String::toUpperCase, employee);
// Reusable: Create the lens once, use many times
var streetLens = EmployeeLenses.company().andThen(CompanyLenses.address()).andThen(AddressLenses.street());
var street1 = streetLens.get(emp1);
var street2 = streetLens.get(emp2);
// Consistent: Use one approach for the entire update
var finalEmployee = EmployeeLenses.company()
.andThen(CompanyLenses.name())
.set("New Company", employee);
Performance Notes
Lenses are optimised for immutable updates:
- Memory efficient: Only creates new objects along the path that changes
- Reusable: Composed lenses can be stored and reused across your application
- Type-safe: All operations are checked at compile time
- Lazy: Operations are only performed when needed
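The "only creates new objects along the path that changes" point can be checked directly: anything off the updated path is carried over by reference, not cloned. A small demonstration with plain records (the `Department` field is hypothetical, added only to have something off the path):

```java
public class StructuralSharingDemo {
    record Department(String name) {}
    record Address(String street, String city) {}
    record Company(String name, Address address) {}
    record Employee(String name, Company company, Department department) {}

    public static void main(String[] args) {
        var employee = new Employee("Alice",
            new Company("Initech Inc.", new Address("123 Fake St", "Anytown")),
            new Department("Engineering"));

        // The copy-and-update cascade a composed lens performs for us:
        Employee updated = new Employee(
            employee.name(),
            new Company(employee.company().name(),
                new Address("456 Main St", employee.company().address().city())),
            employee.department());

        // Only Employee, Company, and Address were re-created; the
        // Department off the update path is the very same object.
        System.out.println(updated.department() == employee.department()); // true
        System.out.println(updated.company() == employee.company());       // false
    }
}
```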
Best Practice: For frequently used paths, create the composed lens once and store it as a static field:
public class EmployeeOptics {
public static final Lens<Employee, String> STREET =
EmployeeLenses.company()
.andThen(CompanyLenses.address())
.andThen(AddressLenses.street());
public static final Lens<Employee, String> COMPANY_NAME =
EmployeeLenses.company()
.andThen(CompanyLenses.name());
}
Complete, Runnable Example
The following standalone example puts all these steps together. You can run it to see the output and the immutability in action.
package org.higherkindedj.example.lens;
// Imports for the generated classes would be automatically resolved by your IDE
import org.higherkindedj.example.lens.LensUsageExampleLenses.AddressLenses;
import org.higherkindedj.example.lens.LensUsageExampleLenses.CompanyLenses;
import org.higherkindedj.example.lens.LensUsageExampleLenses.EmployeeLenses;
import org.higherkindedj.optics.Lens;
import org.higherkindedj.optics.annotations.GenerateLenses;
import java.util.List;
public class LensUsageExample {
// 1. Define a nested, immutable data model.
@GenerateLenses
public record Address(String street, String city) {}
@GenerateLenses
public record Company(String name, Address address) {}
@GenerateLenses
public record Employee(String name, Company company) {}
public static void main(String[] args) {
// 2. Create an initial, nested immutable object.
var initialAddress = new Address("123 Fake St", "Anytown");
var initialCompany = new Company("Initech Inc.", initialAddress);
var initialEmployee = new Employee("Alice", initialCompany);
System.out.println("Original Employee: " + initialEmployee);
System.out.println("------------------------------------------");
// --- SCENARIO 1: Simple update with a `with*` helper ---
System.out.println("--- Scenario 1: Using `with*` Helper ---");
var employeeWithNewName = EmployeeLenses.withName(initialEmployee, "Bob");
System.out.println("After `withName`: " + employeeWithNewName);
System.out.println("------------------------------------------");
// --- SCENARIO 2: Deep update with a composed Lens ---
System.out.println("--- Scenario 2: Using Composed Lens ---");
Lens<Employee, String> employeeToStreet =
EmployeeLenses.company()
.andThen(CompanyLenses.address())
.andThen(AddressLenses.street());
// Use `set` to replace a value
Employee updatedEmployeeSet = employeeToStreet.set("456 Main St", initialEmployee);
System.out.println("After deep `set`: " + updatedEmployeeSet);
// Use `modify` to apply a function
Employee updatedEmployeeModify = employeeToStreet.modify(String::toUpperCase, initialEmployee);
System.out.println("After deep `modify`: " + updatedEmployeeModify);
System.out.println("Original is unchanged: " + initialEmployee);
// --- SCENARIO 3: Demonstrating reusability ---
System.out.println("--- Scenario 3: Reusing Composed Lens ---");
var employee2 = new Employee("Charlie", new Company("Tech Corp", new Address("789 Oak Ave", "Tech City")));
// Same lens works on different employee instances
var bothUpdated = List.of(initialEmployee, employee2)
.stream()
.map(emp -> employeeToStreet.modify(street -> "Remote: " + street, emp))
.toList();
System.out.println("Batch updated: " + bothUpdated);
}
}
Expected Output:
Original Employee: Employee[name=Alice, company=Company[name=Initech Inc., address=Address[street=123 Fake St, city=Anytown]]]
------------------------------------------
--- Scenario 1: Using `with*` Helper ---
After `withName`: Employee[name=Bob, company=Company[name=Initech Inc., address=Address[street=123 Fake St, city=Anytown]]]
------------------------------------------
--- Scenario 2: Using Composed Lens ---
After deep `set`: Employee[name=Alice, company=Company[name=Initech Inc., address=Address[street=456 Main St, city=Anytown]]]
After deep `modify`: Employee[name=Alice, company=Company[name=Initech Inc., address=Address[street=123 FAKE ST, city=Anytown]]]
Original is unchanged: Employee[name=Alice, company=Company[name=Initech Inc., address=Address[street=123 Fake St, city=Anytown]]]
------------------------------------------
--- Scenario 3: Reusing Composed Lens ---
Batch updated: [Employee[name=Alice, company=Company[name=Initech Inc., address=Address[street=Remote: 123 Fake St, city=Anytown]]], Employee[name=Charlie, company=Company[name=Tech Corp, address=Address[street=Remote: 789 Oak Ave, city=Tech City]]]]
As you can see, the generated optics provide a clean, declarative, and type-safe API for working with immutable data, whether your updates are simple and shallow or complex and deep.
Beyond the Basics: Effectful Updates with modifyF
While set and modify are for simple, pure updates, the Lens interface also supports effectful operations through modifyF. This method allows you to perform updates within a context like an Optional, Validated, or CompletableFuture.
This means you can use the same employeeToStreet lens to perform a street name update that involves failable validation or an asynchronous API call, making your business logic incredibly reusable and robust.
// Example: Street validation that might fail
Function<String, Kind<ValidatedKind.Witness<String>, String>> validateStreet =
street -> street.length() > 0 && street.length() < 100
? VALIDATED.widen(Validated.valid(street))
: VALIDATED.widen(Validated.invalid("Street name must be between 1 and 100 characters"));
// Use the same lens with effectful validation
Kind<ValidatedKind.Witness<String>, Employee> result =
employeeToStreet.modifyF(validateStreet, employee, validatedApplicative);
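To see the shape of what `modifyF` buys you without the library's `Kind` machinery, here is a cut-down sketch using plain `Optional` as the effect. This is an illustration of the idea, not Higher-Kinded-J's actual signature (the real `modifyF` is generic over any `Applicative`):

```java
import java.util.Optional;
import java.util.function.BiFunction;
import java.util.function.Function;

public class ModifyFSketch {
    record Lens<S, A>(Function<S, A> getter, BiFunction<S, A, S> setter) {
        // Effectful modify, specialised to Optional: if the function fails
        // (returns empty), the whole update fails; otherwise the updated
        // structure is rebuilt inside the effect.
        Optional<S> modifyF(Function<A, Optional<A>> f, S s) {
            return f.apply(getter.apply(s)).map(a -> setter.apply(s, a));
        }
    }

    record Address(String street, String city) {}

    public static void main(String[] args) {
        Lens<Address, String> street =
            new Lens<>(Address::street, (a, s) -> new Address(s, a.city()));

        // A validation that may fail: street must be non-blank.
        Function<String, Optional<String>> validate =
            s -> s.isBlank() ? Optional.empty() : Optional.of(s.trim());

        System.out.println(street.modifyF(validate, new Address("  123 Fake St  ", "Anytown")));
        // -> Optional[Address[street=123 Fake St, city=Anytown]]
        System.out.println(street.modifyF(validate, new Address("   ", "Anytown")));
        // -> Optional.empty
    }
}
```

The library generalises this pattern: swap `Optional` for `Validated` to accumulate errors, or for `CompletableFuture` to make the update asynchronous, all through the same lens.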
Previous: An Introduction to Optics Next: Prisms: Working with Sum Types
Prisms: A Practical Guide
Working with Sum Types

- How to safely work with sum types and sealed interfaces
- Using `@GeneratePrisms` to create type-safe variant accessors
- The difference between `getOptional` and `build` operations
- Composing prisms with other optics for deep conditional access
- Handling optional data extraction without `instanceof` chains
- When to use prisms vs pattern matching vs traditional type checking
The previous guide demonstrated how a Lens gives us a powerful, composable way to work with "has-a" relationships—a field that is guaranteed to exist within a record.
But what happens when the data doesn't have a guaranteed structure? What if a value can be one of several different types? This is the domain of "is-a" relationships, or sum types, commonly modelled in Java using a `sealed interface` or `enum`.
For this, we need a different kind of optic: the Prism.
The Scenario: Working with JSON-like Data
A Lens is like a sniper rifle, targeting a single, known field. A Prism is like a safe-cracker's tool; it attempts to open a specific "lock" (a particular type) and only succeeds if it has the right key.
Consider a common scenario: modelling a JSON structure. A value can be a string, a number, a boolean, or a nested object.
The Data Model: We can represent this with a sealed interface.
import org.higherkindedj.optics.annotations.GeneratePrisms;
import org.higherkindedj.optics.annotations.GenerateLenses;
import java.util.Map;
@GeneratePrisms // Generates Prisms for each case of the sealed interface
public sealed interface JsonValue {}
public record JsonString(String value) implements JsonValue {}
public record JsonNumber(double value) implements JsonValue {}
public record JsonBoolean(boolean value) implements JsonValue {}
@GenerateLenses // We can still use Lenses on the product types within the sum type
public record JsonObject(Map<String, JsonValue> fields) implements JsonValue {}
Our Goal: We need to safely access and update the value of a JsonString that is deeply nested within another JsonObject. An instanceof and casting approach would be unsafe and verbose. A Lens won't work because a JsonValue might be a JsonNumber, not the JsonObject we expect.
Think of Prisms Like...
- A type-safe filter: Only "lets through" values that match a specific shape
- A safe cast: Like `instanceof` + cast, but functional and composable
- A conditional lens: Works like a lens, but might return empty if the type doesn't match
- A pattern matcher: Focuses on one specific case of a sum type
A Step-by-Step Walkthrough
Step 1: Generating the Prisms
Just as with lenses, we annotate our sealed interface with @GeneratePrisms. This automatically creates a companion class (e.g., JsonValuePrisms) with a Prism for each permitted subtype.
// Generated automatically:
// JsonValuePrisms.jsonString() -> Prism<JsonValue, JsonString>
// JsonValuePrisms.jsonNumber() -> Prism<JsonValue, JsonNumber>
// JsonValuePrisms.jsonBoolean() -> Prism<JsonValue, JsonBoolean>
// JsonValuePrisms.jsonObject() -> Prism<JsonValue, JsonObject>
Step 2: The Core Prism Operations
A Prism is defined by two unique, failable operations:
- `getOptional(source)`: Attempts to focus on the target. It returns an `Optional` which is non-empty only if the `source` matches the Prism's specific case. This is the safe alternative to an `instanceof` check and cast.
- `build(value)`: Constructs the top-level type from a part. This is the reverse operation, used to wrap a value back into its specific case (e.g., taking a `String` and building a `JsonString`).
Prism<JsonValue, JsonString> jsonStringPrism = JsonValuePrisms.jsonString();
// --- Using getOptional (the safe "cast") ---
Optional<JsonString> result1 = jsonStringPrism.getOptional(new JsonString("hello"));
// -> Optional.of(JsonString("hello"))
Optional<JsonString> result2 = jsonStringPrism.getOptional(new JsonNumber(123));
// -> Optional.empty()
// --- Using build (construct the sum type from a part) ---
JsonValue result3 = jsonStringPrism.build(new JsonString("world"));
// -> JsonString("world") (as JsonValue)
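Conceptually, a prism is just a failable match paired with a constructor. The following self-contained sketch is a simplified stand-in for the library's `Prism` (not its real API), showing both core operations backed by an `instanceof` pattern:

```java
import java.util.Optional;
import java.util.function.Function;

public class PrismSketch {
    // A minimal Prism: a match that may fail, plus a constructor back up.
    record Prism<S, A>(Function<S, Optional<A>> match, Function<A, S> construct) {
        Optional<A> getOptional(S s) { return match.apply(s); }
        S build(A a) { return construct.apply(a); }
    }

    sealed interface JsonValue permits JsonString, JsonNumber {}
    record JsonString(String value) implements JsonValue {}
    record JsonNumber(double value) implements JsonValue {}

    public static void main(String[] args) {
        Prism<JsonValue, JsonString> jsonString = new Prism<>(
            v -> v instanceof JsonString js ? Optional.of(js) : Optional.empty(),
            js -> js);

        System.out.println(jsonString.getOptional(new JsonString("hello")));
        // -> Optional[JsonString[value=hello]]
        System.out.println(jsonString.getOptional(new JsonNumber(123)));
        // -> Optional.empty
        System.out.println(jsonString.build(new JsonString("world")));
        // -> JsonString[value=world]
    }
}
```

This is the boilerplate `@GeneratePrisms` produces for each permitted subtype of a sealed interface.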
Step 3: Composing Prisms for Deep Access
The true power is composing Prisms with other optics. When a composition might fail (any time a Prism is involved), the result is a Traversal. To ensure type-safety during composition, we convert each optic in the chain to a Traversal using .asTraversal().
// Create all the optics we need
Prism<JsonValue, JsonObject> jsonObjectPrism = JsonValuePrisms.jsonObject();
Prism<JsonValue, JsonString> jsonStringPrism = JsonValuePrisms.jsonString();
Lens<JsonObject, Map<String, JsonValue>> fieldsLens = JsonObjectLenses.fields();
// The composed optic: safely navigate from JsonObject -> userLogin field -> name field -> string value
Traversal<JsonObject, String> userNameTraversal =
fieldsLens.asTraversal() // JsonObject -> Map<String, JsonValue>
.andThen(Traversals.forMap("userLogin")) // -> JsonValue (if "userLogin" key exists)
.andThen(jsonObjectPrism.asTraversal()) // -> JsonObject (if it's an object)
.andThen(fieldsLens.asTraversal()) // -> Map<String, JsonValue>
.andThen(Traversals.forMap("name")) // -> JsonValue (if "name" key exists)
.andThen(jsonStringPrism.asTraversal()) // -> JsonString (if it's a string)
.andThen(JsonStringLenses.value().asTraversal()); // -> String
This composed Traversal now represents a safe, deep path that will only succeed if every step in the chain matches.
When to Use Prisms vs Other Approaches
Use Prisms When:
- Type-safe variant handling - Working with `sealed interface` or `enum` cases
- Optional data extraction - You need to safely "try" to get a specific type
- Composable type checking - Building reusable type-safe paths
- Functional pattern matching - Avoiding `instanceof` chains
// Perfect for safe type extraction
Optional<String> errorMessage = DomainErrorPrisms.validationError()
.andThen(ValidationErrorLenses.message())
.getOptional(someError);
Use Traditional instanceof When:
- One-off type checks - Not building reusable logic
- Imperative control flow - You need if/else branching
- Performance critical paths - Minimal abstraction overhead needed
// Sometimes instanceof is clearer for simple cases
if (jsonValue instanceof JsonString jsonStr) {
return jsonStr.value().toUpperCase();
}
Use Pattern Matching When:
- Exhaustive case handling - You need to handle all variants
- Complex extraction logic - Multiple levels of pattern matching
- Modern codebases - Using recent Java features
// Pattern matching for comprehensive handling
return switch (jsonValue) {
case JsonString(var str) -> str.toUpperCase();
case JsonNumber(var num) -> String.valueOf(num);
case JsonBoolean(var bool) -> String.valueOf(bool);
case JsonObject(var fields) -> "Object with " + fields.size() + " fields";
};
Common Pitfalls
❌ Don't Do This:
// Unsafe: Assuming the cast will succeed
JsonString jsonStr = (JsonString) jsonValue; // Can throw ClassCastException!
// Verbose: Repeated instanceof checks
if (jsonValue instanceof JsonObject obj1) {
var userValue = obj1.fields().get("userLogin");
if (userValue instanceof JsonObject obj2) {
var nameValue = obj2.fields().get("name");
if (nameValue instanceof JsonString str) {
return str.value().toUpperCase();
}
}
}
// Inefficient: Creating prisms repeatedly
var name1 = JsonValuePrisms.jsonString().getOptional(value1);
var name2 = JsonValuePrisms.jsonString().getOptional(value2);
var name3 = JsonValuePrisms.jsonString().getOptional(value3);
✅ Do This Instead:
// Safe: Use prism's getOptional
Optional<JsonString> maybeJsonStr = JsonValuePrisms.jsonString().getOptional(jsonValue);
// Composable: Build reusable safe paths
var userNamePath = JsonValuePrisms.jsonObject().asTraversal()
    .andThen(JsonObjectLenses.fields().asTraversal())
    .andThen(Traversals.forMap("userLogin"))
    .andThen(JsonValuePrisms.jsonObject().asTraversal())
    // ... continue composition
// Efficient: Reuse prisms and composed paths
var stringPrism = JsonValuePrisms.jsonString();
var name1 = stringPrism.getOptional(value1);
var name2 = stringPrism.getOptional(value2);
var name3 = stringPrism.getOptional(value3);
Performance Notes
Prisms are optimised for type safety and composability:
- Fast type checking: Prisms use `instanceof` under the hood, which is optimised by the JVM
- Lazy evaluation: Composed prisms only perform checks when needed
- Memory efficient: No boxing or wrapper allocation for failed matches
- Composable: Complex type-safe paths can be built once and reused
Best Practice: For frequently used prism combinations, create them once and store as constants:
public class JsonOptics {
public static final Prism<JsonValue, JsonString> STRING =
JsonValuePrisms.jsonString();
public static final Traversal<JsonValue, String> STRING_VALUE =
    STRING.asTraversal()
        .andThen(JsonStringLenses.value().asTraversal());
public static final Traversal<JsonObject, String> USER_NAME =
    JsonObjectLenses.fields().asTraversal()
        .andThen(Traversals.forMap("userLogin"))
        .andThen(JsonValuePrisms.jsonObject().asTraversal())
        .andThen(JsonObjectLenses.fields().asTraversal())
        .andThen(Traversals.forMap("name"))
        .andThen(STRING.asTraversal())
        .andThen(JsonStringLenses.value().asTraversal());
}
Real-World Example: API Response Handling
Here's a practical example of using prisms to handle different API response types safely:
@GeneratePrisms
public sealed interface ApiResponse {}
public record SuccessResponse(String data, int statusCode) implements ApiResponse {}
public record ErrorResponse(String message, String errorCode) implements ApiResponse {}
public record TimeoutResponse(long timeoutMs) implements ApiResponse {}
public class ApiHandler {
// Reusable prisms for different response types
private static final Prism<ApiResponse, SuccessResponse> SUCCESS =
ApiResponsePrisms.successResponse();
private static final Prism<ApiResponse, ErrorResponse> ERROR =
ApiResponsePrisms.errorResponse();
private static final Prism<ApiResponse, TimeoutResponse> TIMEOUT =
ApiResponsePrisms.timeoutResponse();
public String handleResponse(ApiResponse response) {
// Type-safe extraction and handling
return SUCCESS.getOptional(response)
.map(success -> "Success: " + success.data())
.or(() -> ERROR.getOptional(response)
.map(error -> "Error " + error.errorCode() + ": " + error.message()))
.or(() -> TIMEOUT.getOptional(response)
.map(timeout -> "Request timed out after " + timeout.timeoutMs() + "ms"))
.orElse("Unknown response type");
}
// Use prisms for conditional processing
public boolean isRetryable(ApiResponse response) {
return ERROR.getOptional(response)
.map(error -> "RATE_LIMIT".equals(error.errorCode()) || "TEMPORARY".equals(error.errorCode()))
.or(() -> TIMEOUT.getOptional(response).map(t -> true))
.orElse(false);
}
}
Complete, Runnable Example
This example puts it all together, showing how to use the composed Traversal to perform a safe update.
package org.higherkindedj.example.prism;
import org.higherkindedj.optics.Lens;
import org.higherkindedj.optics.Prism;
import org.higherkindedj.optics.Traversal;
import org.higherkindedj.optics.annotations.GenerateLenses;
import org.higherkindedj.optics.annotations.GeneratePrisms;
import org.higherkindedj.optics.util.Traversals;
import java.util.*;
public class PrismUsageExample {
// 1. Define the nested data model with sum types.
@GeneratePrisms
public sealed interface JsonValue {}
public record JsonString(String value) implements JsonValue {}
public record JsonNumber(double value) implements JsonValue {}
public record JsonBoolean(boolean value) implements JsonValue {}
@GenerateLenses
public record JsonObject(Map<String, JsonValue> fields) implements JsonValue {}
public static void main(String[] args) {
// 2. Create the initial nested structure.
var userData = Map.of(
"userLogin", new JsonObject(Map.of(
"name", new JsonString("Alice"),
"age", new JsonNumber(30),
"active", new JsonBoolean(true)
)),
"metadata", new JsonObject(Map.of(
"version", new JsonString("1.0")
))
);
var data = new JsonObject(userData);
System.out.println("Original Data: " + data);
System.out.println("------------------------------------------");
// 3. Get the generated and manually created optics.
Prism<JsonValue, JsonObject> jsonObjectPrism = JsonValuePrisms.jsonObject();
Prism<JsonValue, JsonString> jsonStringPrism = JsonValuePrisms.jsonString();
Lens<JsonObject, Map<String, JsonValue>> fieldsLens = JsonObjectLenses.fields();
Lens<JsonString, String> jsonStringValueLens = Lens.of(JsonString::value, (js, s) -> new JsonString(s));
// 4. Demonstrate individual prism operations
System.out.println("--- Individual Prism Operations ---");
// Safe type extraction
JsonValue userValue = data.fields().get("userLogin");
Optional<JsonObject> userObject = jsonObjectPrism.getOptional(userValue);
System.out.println("User object: " + userObject);
// Attempting to extract wrong type
JsonValue nameValue = ((JsonObject) userValue).fields().get("name");
Optional<JsonNumber> nameAsNumber = JsonValuePrisms.jsonNumber().getOptional(nameValue);
System.out.println("Name as number (should be empty): " + nameAsNumber);
// Building new values
JsonValue newString = jsonStringPrism.build(new JsonString("Bob"));
System.out.println("Built new string: " + newString);
System.out.println("------------------------------------------");
// 5. Compose the full traversal.
Traversal<JsonObject, String> userNameTraversal =
fieldsLens.asTraversal()
.andThen(Traversals.forMap("userLogin"))
.andThen(jsonObjectPrism.asTraversal())
.andThen(fieldsLens.asTraversal())
.andThen(Traversals.forMap("name"))
.andThen(jsonStringPrism.asTraversal())
.andThen(jsonStringValueLens.asTraversal());
// 6. Use the composed traversal to perform safe updates
System.out.println("--- Composed Traversal Operations ---");
JsonObject updatedData = Traversals.modify(userNameTraversal, String::toUpperCase, data);
System.out.println("After safe `modify`: " + updatedData);
// 7. Demonstrate that the traversal safely handles missing paths
var dataWithoutUser = new JsonObject(Map.of("metadata", new JsonString("test")));
JsonObject safeUpdate = Traversals.modify(userNameTraversal, String::toUpperCase, dataWithoutUser);
System.out.println("Safe update on missing path: " + safeUpdate);
System.out.println("Original is unchanged: " + data);
System.out.println("------------------------------------------");
// 8. Demonstrate error-resistant operations
System.out.println("--- Error-Resistant Operations ---");
// Get all string values safely
List<String> allStrings = List.of(
new JsonString("hello"),
new JsonNumber(42),
new JsonString("world"),
new JsonBoolean(true)
).stream()
.map(jsonStringPrism::getOptional)
.filter(Optional::isPresent)
.map(Optional::get)
.map(JsonString::value)
.toList();
System.out.println("Extracted strings only: " + allStrings);
}
}
Expected Output:
Original Data: JsonObject[fields={userLogin=JsonObject[fields={name=JsonString[value=Alice], age=JsonNumber[value=30.0], active=JsonBoolean[value=true]}], metadata=JsonObject[fields={version=JsonString[value=1.0]}]}]
------------------------------------------
--- Individual Prism Operations ---
User object: Optional[JsonObject[fields={name=JsonString[value=Alice], age=JsonNumber[value=30.0], active=JsonBoolean[value=true]}]]
Name as number (should be empty): Optional.empty
Built new string: JsonString[value=Bob]
------------------------------------------
--- Composed Traversal Operations ---
After safe `modify`: JsonObject[fields={userLogin=JsonObject[fields={name=JsonString[value=ALICE], age=JsonNumber[value=30.0], active=JsonBoolean[value=true]}], metadata=JsonObject[fields={version=JsonString[value=1.0]}]}]
Safe update on missing path: JsonObject[fields={metadata=JsonString[value=test]}]
Original is unchanged: JsonObject[fields={userLogin=JsonObject[fields={name=JsonString[value=Alice], age=JsonNumber[value=30.0], active=JsonBoolean[value=true]}], metadata=JsonObject[fields={version=JsonString[value=1.0]}]}]
------------------------------------------
--- Error-Resistant Operations ---
Extracted strings only: [hello, world]
Prism Convenience Methods
Streamlined Operations for Common Patterns
Whilst getOptional() and build() are the core operations, the Prism interface provides several convenience methods that make everyday tasks more ergonomic and expressive.
Quick Reference:
| Method | Purpose | Returns |
|---|---|---|
| `matches(S source)` | Check if prism matches without extraction | `boolean` |
| `getOrElse(A default, S source)` | Extract value or return default | `A` |
| `mapOptional(Function<A, B> f, S source)` | Transform matched value | `Optional<B>` |
| `modify(Function<A, A> f, S source)` | Modify if matches, else return original | `S` |
| `modifyWhen(Predicate<A> p, Function<A, A> f, S source)` | Modify only when predicate satisfied | `S` |
| `setWhen(Predicate<A> p, A value, S source)` | Set only when predicate satisfied | `S` |
| `orElse(Prism<S, A> other)` | Try this prism, then fallback | `Prism<S, A>` |
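A good way to internalise these methods is to notice that every one of them can be derived from `getOptional` and `build` alone. The sketch below shows plausible derivations over a minimal hand-rolled prism (a stand-in, not the library's source):

```java
import java.util.Optional;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

public class PrismDefaultsSketch {
    record Prism<S, A>(Function<S, Optional<A>> match, Function<A, S> construct) {
        Optional<A> getOptional(S s) { return match.apply(s); }
        S build(A a) { return construct.apply(a); }

        // Each convenience method falls out of the two core operations:
        boolean matches(S s) { return getOptional(s).isPresent(); }
        A getOrElse(A fallback, S s) { return getOptional(s).orElse(fallback); }
        <B> Optional<B> mapOptional(Function<A, B> f, S s) { return getOptional(s).map(f); }
        S modify(UnaryOperator<A> f, S s) {
            return getOptional(s).map(f).map(this::build).orElse(s);
        }
        S modifyWhen(Predicate<A> p, UnaryOperator<A> f, S s) {
            return getOptional(s).filter(p).map(f).map(this::build).orElse(s);
        }
        S setWhen(Predicate<A> p, A value, S s) {
            return modifyWhen(p, a -> value, s);
        }
        Prism<S, A> orElse(Prism<S, A> other) {
            return new Prism<>(s -> getOptional(s).or(() -> other.getOptional(s)), construct);
        }
    }

    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side) implements Shape {}

    public static void main(String[] args) {
        Prism<Shape, Circle> circle = new Prism<>(
            s -> s instanceof Circle c ? Optional.of(c) : Optional.empty(),
            c -> c);

        Shape shape = new Circle(2.0);
        System.out.println(circle.matches(shape));                                        // true
        System.out.println(circle.getOrElse(new Circle(1.0), new Square(3.0)).radius());  // 1.0
        System.out.println(circle.modify(c -> new Circle(c.radius() * 2), shape));        // Circle[radius=4.0]
    }
}
```

Note how `modify` and `modifyWhen` fall back to returning the original `s` on a failed match, which is exactly the "safely returns the original structure unchanged" behaviour described below.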
Type Checking with matches()
The matches() method provides a clean alternative to getOptional(source).isPresent():
Prism<JsonValue, JsonString> stringPrism = JsonValuePrisms.jsonString();
// Clear, declarative type checking
if (stringPrism.matches(value)) {
// Process as string
}
// Useful in streams and filters
List<JsonValue> onlyStrings = values.stream()
.filter(stringPrism::matches)
.collect(Collectors.toList());
Real-World Example: Filtering polymorphic domain events:
@GeneratePrisms
sealed interface DomainEvent permits UserEvent, OrderEvent, PaymentEvent {}
// Business logic: process only payment events
public void processPayments(List<DomainEvent> events) {
Prism<DomainEvent, PaymentEvent> paymentPrism =
DomainEventPrisms.paymentEvent();
long paymentCount = events.stream()
.filter(paymentPrism::matches)
.count();
logger.info("Processing {} payment events", paymentCount);
events.stream()
.filter(paymentPrism::matches)
.map(paymentPrism::getOptional)
.flatMap(Optional::stream)
.forEach(this::processPayment);
}
Default Values with getOrElse()
When you need fallback values, getOrElse() is more concise than getOptional().orElse():
Prism<ApiResponse, SuccessResponse> successPrism =
ApiResponsePrisms.successResponse();
// Extract success data or use default
String data = successPrism.getOrElse(
new SuccessResponse("fallback", 200),
response
).data();
// Particularly useful for configuration
Config config = Prisms.some()
.getOrElse(Config.DEFAULT, optionalConfig);
Real-World Example: Parsing user input with graceful degradation:
@GeneratePrisms
sealed interface ParsedValue permits IntValue, StringValue, InvalidValue {}
public int parseUserQuantity(String input, int defaultQty) {
ParsedValue parsed = parseInput(input);
Prism<ParsedValue, IntValue> intPrism = ParsedValuePrisms.intValue();
// Extract integer or use sensible default
return intPrism.getOrElse(
new IntValue(defaultQty),
parsed
).value();
}
// Application settings with fallback
public DatabaseConfig getDatabaseConfig(ApplicationConfig config) {
Prism<ConfigSource, DatabaseConfig> dbConfigPrism =
ConfigSourcePrisms.databaseConfig();
return dbConfigPrism.getOrElse(
DatabaseConfig.DEFAULT_POSTGRES,
config.source()
);
}
Transforming Matches with mapOptional()
The mapOptional() method transforms matched values without building them back into the source type:
Prism<JsonValue, JsonNumber> numberPrism = JsonValuePrisms.jsonNumber();
// Extract and transform in one operation
Optional<String> formatted = numberPrism.mapOptional(
num -> String.format("%.2f", num.value()),
jsonValue
);
// Compose transformations
Optional<Boolean> isLarge = numberPrism.mapOptional(
num -> num.value() > 1000,
jsonValue
);
Real-World Example: ETL data transformation pipeline:
@GeneratePrisms
sealed interface SourceData permits CsvRow, JsonObject, XmlNode {}
public List<CustomerRecord> extractCustomers(List<SourceData> sources) {
Prism<SourceData, CsvRow> csvPrism = SourceDataPrisms.csvRow();
return sources.stream()
.map(source -> csvPrism.mapOptional(
csv -> new CustomerRecord(
csv.column("customer_id"),
csv.column("name"),
csv.column("email")
),
source
))
.flatMap(Optional::stream)
.collect(Collectors.toList());
}
// Extract business metrics from polymorphic events
public Optional<BigDecimal> extractRevenue(DomainEvent event) {
Prism<DomainEvent, OrderCompleted> orderPrism =
DomainEventPrisms.orderCompleted();
return orderPrism.mapOptional(
order -> order.lineItems().stream()
.map(LineItem::totalPrice)
.reduce(BigDecimal.ZERO, BigDecimal::add),
event
);
}
Simple Modifications with modify()
Instead of manually calling getOptional().map(f).map(build), use modify():
Prism<JsonValue, JsonString> stringPrism = JsonValuePrisms.jsonString();
// ✅ Clean modification
JsonValue uppercased = stringPrism.modify(
str -> new JsonString(str.value().toUpperCase()),
jsonValue
);
// ❌ Verbose alternative
JsonValue verboseResult = stringPrism.getOptional(jsonValue)
.map(str -> new JsonString(str.value().toUpperCase()))
.map(stringPrism::build)
.orElse(jsonValue);
If the prism doesn't match, modify() safely returns the original structure unchanged.
Conditional Operations with modifyWhen() and setWhen()
These methods combine matching with predicate-based filtering:
Prism<ConfigValue, StringConfig> stringConfig =
ConfigValuePrisms.stringConfig();
// Only modify non-empty strings
ConfigValue sanitised = stringConfig.modifyWhen(
str -> !str.value().isEmpty(),
str -> new StringConfig(str.value().trim()),
configValue
);
// Only update if validation passes
ConfigValue validated = stringConfig.setWhen(
str -> str.value().length() <= 255,
new StringConfig("validated"),
configValue
);
Real-World Example: Business rule enforcement in order processing:
@GeneratePrisms
sealed interface OrderStatus permits Draft, Submitted, Approved, Rejected {}
public class OrderProcessor {
private static final Prism<OrderStatus, Submitted> SUBMITTED =
OrderStatusPrisms.submitted();
// Only approve orders above minimum value
public OrderStatus approveIfEligible(
OrderStatus status,
BigDecimal orderValue,
BigDecimal minValue
) {
return SUBMITTED.setWhen(
submitted -> orderValue.compareTo(minValue) >= 0,
new Approved(Instant.now(), "AUTO_APPROVED"),
status
);
}
// Apply discount only to high-value draft orders
public OrderStatus applyVipDiscount(OrderStatus status, Order order) {
Prism<OrderStatus, Draft> draftPrism = OrderStatusPrisms.draft();
return draftPrism.modifyWhen(
draft -> order.totalValue().compareTo(VIP_THRESHOLD) > 0,
draft -> draft.withDiscount(VIP_DISCOUNT_RATE),
status
);
}
}
Use Cases:
- Conditional validation: Update only if current value meets criteria
- Guarded transformations: Apply changes only to valid states
- Business rules: Enforce constraints during updates
- Workflow automation: Apply state transitions based on business logic
Fallback Matching with orElse()
The orElse() method chains prisms to try multiple matches:
Prism<JsonValue, JsonNumber> intPrism = JsonValuePrisms.jsonInt();
Prism<JsonValue, JsonNumber> doublePrism = JsonValuePrisms.jsonDouble();
// Try int first, fall back to double
Prism<JsonValue, JsonNumber> anyNumber = intPrism.orElse(doublePrism);
Optional<JsonNumber> result = anyNumber.getOptional(jsonValue);
// Matches either integer or double JSON values
// Building always uses the first prism's constructor
JsonValue built = anyNumber.build(new JsonNumber(42)); // Uses intPrism.build
Real-World Example: Handling multiple error types in API responses:
Prism<ApiResponse, String> errorMessage =
ApiResponsePrisms.validationError()
.andThen(ValidationErrorLenses.message())
.orElse(
ApiResponsePrisms.serverError()
.andThen(ServerErrorLenses.message())
);
// Extracts error message from either error type
Optional<String> message = errorMessage.getOptional(response);
A quick recap of when to reach for each convenience method:
- matches(): Type guards, stream filters, conditional logic
- getOrElse(): Configuration, default values, fallback data
- mapOptional(): Projections, transformations without reconstruction
- modify(): Simple transformations of matching cases
- modifyWhen(): Conditional updates based on current state
- setWhen(): Guarded updates with validation
- orElse(): Handling multiple variants, fallback strategies
Common Prism Patterns with the Prisms Utility
Ready-Made Prisms for Standard Types
The Prisms utility class (in org.higherkindedj.optics.util) provides factory methods for common prism patterns, saving you from writing boilerplate for standard Java types.
Quick Reference:
| Factory Method | Type Signature | Use Case |
|---|---|---|
| some() | Prism<Optional<A>, A> | Extract present Optional values |
| left() | Prism<Either<L, R>, L> | Focus on Left case |
| right() | Prism<Either<L, R>, R> | Focus on Right case |
| only(A value) | Prism<A, Unit> | Match specific value |
| notNull() | Prism<@Nullable A, A> | Filter null values |
| instanceOf(Class<A>) | Prism<S, A> | Safe type-based casting |
| listHead() | Prism<List<A>, A> | First element (if exists) |
| listLast() | Prism<List<A>, A> | Last element (if exists) |
| listAt(int) | Prism<List<A>, A> | Element at index (read-only) |
Working with Optional: Prisms.some()
import org.higherkindedj.optics.util.Prisms;
Prism<Optional<String>, String> somePrism = Prisms.some();
Optional<String> present = Optional.of("hello");
Optional<String> value = somePrism.getOptional(present); // Optional.of("hello")
Optional<String> empty = Optional.empty();
Optional<String> noMatch = somePrism.getOptional(empty); // Optional.empty()
// Useful for nested Optionals
Optional<Optional<Config>> nestedConfig = loadConfig();
Optional<Config> flattened = somePrism.getOptional(nestedConfig)
.flatMap(Function.identity());
Either Case Handling: Prisms.left() and Prisms.right()
Prism<Either<String, Integer>, String> leftPrism = Prisms.left();
Prism<Either<String, Integer>, Integer> rightPrism = Prisms.right();
Either<String, Integer> error = Either.left("Failed");
Optional<String> errorMsg = leftPrism.getOptional(error); // Optional.of("Failed")
Optional<Integer> noValue = rightPrism.getOptional(error); // Optional.empty()
// Compose with lenses for deep access
record ValidationError(String code, String message) {}
Lens<ValidationError, String> messageLens = ValidationErrorLenses.message();
Prism<Either<ValidationError, Data>, String> errorMessage =
Prisms.<ValidationError, Data>left()
.andThen(messageLens);
Either<ValidationError, Data> result = validate(data);
Optional<String> msg = errorMessage.getOptional(result);
Sentinel Values: Prisms.only()
Perfect for matching specific constant values:
Prism<String, Unit> httpOkPrism = Prisms.only("200 OK");
// Check for specific status
if (httpOkPrism.matches(statusCode)) {
// Handle success case
}
// Filter for specific values
List<String> onlyErrors = statusCodes.stream()
.filter(Prisms.only("500 ERROR")::matches)
.collect(Collectors.toList());
// Null sentinel handling
Prism<String, Unit> nullPrism = Prisms.only(null);
boolean isNull = nullPrism.matches(value);
Null Safety: Prisms.notNull()
Prism<String, String> notNullPrism = Prisms.notNull();
// Safe extraction
@Nullable String nullable = getDatabaseValue();
Optional<String> safe = notNullPrism.getOptional(nullable);
// Compose to filter null values in pipelines
Traversal<List<String>, String> nonNullStrings =
Traversals.<String>forList()
.andThen(Prisms.<String>notNull().asTraversal());
List<@Nullable String> mixedList = Arrays.asList("hello", null, "world", null); // List.of rejects nulls
List<String> filtered = Traversals.getAll(nonNullStrings, mixedList);
// Result: ["hello", "world"]
Type-Safe Casting: Prisms.instanceOf()
Elegant alternative to instanceof checks in type hierarchies:
sealed interface Animal permits Dog, Cat, Bird {}
record Dog(String name, String breed) implements Animal {}
record Cat(String name, int lives) implements Animal {}
record Bird(String name, boolean canFly) implements Animal {}
Prism<Animal, Dog> dogPrism = Prisms.instanceOf(Dog.class);
Animal animal = new Dog("Buddy", "Labrador");
Optional<Dog> maybeDog = dogPrism.getOptional(animal); // Optional.of(Dog(...))
// Compose with lenses for deep access
Lens<Dog, String> breedLens = DogLenses.breed();
Traversal<Animal, String> dogBreed =
dogPrism.asTraversal().andThen(breedLens.asTraversal());
List<Animal> animals = List.of(
new Dog("Rex", "German Shepherd"),
new Cat("Whiskers", 9),
new Dog("Max", "Beagle")
);
List<String> breeds = Traversals.getAll(
Traversals.<Animal>forList().andThen(dogBreed),
animals
);
// Result: ["German Shepherd", "Beagle"]
Collection Element Access
// First element (if list is non-empty)
Prism<List<String>, String> headPrism = Prisms.listHead();
List<String> names = List.of("Alice", "Bob", "Charlie");
Optional<String> first = headPrism.getOptional(names); // Optional.of("Alice")
// Last element
Prism<List<String>, String> lastPrism = Prisms.listLast();
Optional<String> last = lastPrism.getOptional(names); // Optional.of("Charlie")
// Element at specific index (read-only for queries)
Prism<List<String>, String> secondPrism = Prisms.listAt(1);
Optional<String> second = secondPrism.getOptional(names); // Optional.of("Bob")
// Safe access patterns
String firstOrDefault = headPrism.getOrElse("Unknown", names);
boolean hasList = headPrism.matches(names);
The listHead() and listLast() prisms have limited build() operations—they create singleton lists. The listAt(int) prism throws UnsupportedOperationException on build() since there's no meaningful way to construct a complete list from a single indexed element.
Use these prisms for:
- Safe element extraction
- Conditional checks (with matches())
- Query operations (with getOptional())
For list modification, use Traversal or Lens instead:
// ✅ For modifications, use proper traversals
Lens<List<String>, String> firstLens = listFirstElementLens(); // hypothetical helper
List<String> updated = firstLens.modify(String::toUpperCase, names);
Composing Utility Prisms
The real power emerges when composing these utility prisms with your domain optics:
record Config(Optional<Either<String, DatabaseSettings>> database) {}
record DatabaseSettings(String host, int port) {}
// Build a path through Optional -> Either -> Settings -> host
Traversal<Config, String> databaseHost =
ConfigLenses.database() // Lens<Config, Optional<Either<...>>>
.asTraversal()
.andThen(Prisms.some().asTraversal()) // -> Either<String, DatabaseSettings>
.andThen(Prisms.right().asTraversal()) // -> DatabaseSettings
.andThen(DatabaseSettingsLenses.host().asTraversal()); // -> String
Config config = loadConfig();
Optional<String> host = Traversals.getAll(databaseHost, config)
.stream().findFirst();
Utility prisms are lightweight and stateless—they're safe to create on-demand or cache as constants:
public class AppPrisms {
public static final Prism<Optional<User>, User> SOME_USER = Prisms.some();
public static final Prism<Response, SuccessResponse> SUCCESS =
Prisms.instanceOf(SuccessResponse.class);
}
Why Prisms are Essential
Lens handles the "what" and Prism handles the "what if." Together, they allow you to model and operate on virtually any immutable data structure you can design. Prisms are essential for:
- Safety: Eliminating instanceof checks and unsafe casts.
- Clarity: Expressing failable focus in a clean, functional way.
- Composability: Combining checks for different data shapes into a single, reusable optic.
- Maintainability: Creating type-safe paths that won't break when data structures evolve.
By adding Prisms to your toolkit, you can write even more robust, declarative, and maintainable code that gracefully handles the complexity of real-world data structures.
Once you're comfortable with these prism fundamentals, explore Advanced Prism Patterns for production-ready patterns including:
- Configuration management with layered prism composition
- API response handling with type-safe error recovery
- Data validation pipelines and event processing systems
- State machine implementations and plugin architectures
- Performance optimisation and testing strategies
Further Reading
For deeper understanding of prisms and optics theory:
- Profunctor Optics: Modular Data Accessors - Academic foundation for modern optics
- The Essence of Functional Programming - Wadler's seminal paper on monads and functors
- Lens in Scala (Monocle) - Production-ready Scala optics library with extensive examples
- Haskell Lens Library - Canonical reference implementation
- A Little Lens Starter Tutorial - Beginner-friendly introduction to optics concepts
Advanced Prism Patterns
Real-World Applications of Prisms in Production Systems

- Configuration management with layered prism composition
- API response handling with type-safe error recovery
- Data validation pipelines using prisms for conditional processing
- Event processing systems with prism-based routing
- State machine implementations using prisms for transitions
- Plugin architectures with type-safe variant handling
- Performance optimisation patterns for production systems
- Testing strategies for prism-heavy codebases
This guide explores sophisticated prism patterns encountered in production Java applications. We'll move beyond basic type matching to examine how prisms enable elegant solutions to complex architectural problems.
This guide assumes familiarity with prism fundamentals including getOptional(), build(), convenience methods (matches(), modify(), modifyWhen(), etc.), and the Prisms utility class. If you're new to prisms, start with Prisms: A Practical Guide which covers:
- Core prism operations and type-safe variant handling
- The 7 convenience methods for streamlined operations
- The Prisms utility class for common patterns
- Composition with lenses and traversals
Pattern 1: Configuration Management
Type-Safe, Layered Configuration Resolution
Configuration systems often deal with multiple sources (environment variables, files, defaults) and various data types. Prisms provide a type-safe way to navigate this complexity.
The Challenge
// Traditional approach: brittle and verbose
Object rawValue = config.get("database.connection.pool.size");
if (rawValue instanceof Integer i) {
return i > 0 ? i : DEFAULT_POOL_SIZE;
} else if (rawValue instanceof String s) {
try {
int parsed = Integer.parseInt(s);
return parsed > 0 ? parsed : DEFAULT_POOL_SIZE;
} catch (NumberFormatException e) {
return DEFAULT_POOL_SIZE;
}
}
return DEFAULT_POOL_SIZE;
The Prism Solution
@GeneratePrisms
sealed interface ConfigValue permits StringValue, IntValue, BoolValue, NestedConfig {}
record StringValue(String value) implements ConfigValue {}
record IntValue(int value) implements ConfigValue {}
record BoolValue(boolean value) implements ConfigValue {}
record NestedConfig(Map<String, ConfigValue> values) implements ConfigValue {}
public class ConfigResolver {
private static final Prism<ConfigValue, IntValue> INT =
ConfigValuePrisms.intValue();
private static final Prism<ConfigValue, StringValue> STRING =
ConfigValuePrisms.stringValue();
public static int getPoolSize(ConfigValue value) {
// Try integer first, fall back to parsing string
return INT.mapOptional(IntValue::value, value)
.filter(i -> i > 0)
.or(() -> STRING.mapOptional(StringValue::value, value)
.flatMap(ConfigResolver::safeParseInt)
.filter(i -> i > 0))
.orElse(DEFAULT_POOL_SIZE);
}
private static Optional<Integer> safeParseInt(String s) {
try {
return Optional.of(Integer.parseInt(s));
} catch (NumberFormatException e) {
return Optional.empty();
}
}
}
Nested Configuration Access
// Build a type-safe path through nested configuration
Prism<ConfigValue, NestedConfig> nested = ConfigValuePrisms.nestedConfig();
Lens<NestedConfig, Map<String, ConfigValue>> values = NestedConfigLenses.values();
Traversal<ConfigValue, ConfigValue> databaseConfig =
nested.asTraversal()
.andThen(values.asTraversal())
.andThen(Traversals.forMap("database"))
.andThen(nested.asTraversal())
.andThen(values.asTraversal())
.andThen(Traversals.forMap("connection"));
// Extract with fallback
ConfigValue rootConfig = loadConfiguration(); // Top-level configuration object
Optional<ConfigValue> connConfig = Traversals.getAll(databaseConfig, rootConfig)
.stream().findFirst();
- Cache composed prisms: Configuration paths don't change at runtime
- Use orElse() chains: Handle type coercion gracefully
- Validate at load time: Use modifyWhen() to enforce constraints
- Provide clear defaults: Always have fallback values
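The "validate at load time" advice can be sketched as a one-off clamp applied when configuration is first read, so downstream code never sees an out-of-range value. The helper below hand-rolls modifyWhen() semantics for illustration; the type names and the [1, 100] bounds are assumptions, not library API:

```java
import java.util.Optional;
import java.util.function.Function;
import java.util.function.Predicate;

public class ConfigLoadValidation {
    public sealed interface ConfigValue permits IntValue, StringValue {}
    public record IntValue(int value) implements ConfigValue {}
    public record StringValue(String value) implements ConfigValue {}

    // Hand-rolled modifyWhen(): match the variant, test the predicate,
    // and only then apply the fix; otherwise return the source untouched.
    static ConfigValue modifyWhen(Function<ConfigValue, Optional<IntValue>> prism,
                                  Predicate<IntValue> when,
                                  Function<IntValue, ConfigValue> fix,
                                  ConfigValue source) {
        return prism.apply(source).filter(when).map(fix).orElse(source);
    }

    static Optional<IntValue> asInt(ConfigValue v) {
        return v instanceof IntValue i ? Optional.of(i) : Optional.empty();
    }

    // Clamp pool size into [1, 100] once, at load time.
    public static ConfigValue clampPoolSize(ConfigValue raw) {
        return modifyWhen(
            ConfigLoadValidation::asInt,
            i -> i.value() < 1 || i.value() > 100,   // only fix out-of-range ints
            i -> new IntValue(Math.max(1, Math.min(100, i.value()))),
            raw);
    }

    public static void main(String[] args) {
        System.out.println(clampPoolSize(new IntValue(0)));      // clamped to 1
        System.out.println(clampPoolSize(new IntValue(50)));     // in range, untouched
        System.out.println(clampPoolSize(new StringValue("x"))); // no match, untouched
    }
}
```

Because modifyWhen() is total (it never throws on a miss), the clamp composes safely over heterogeneous configuration trees.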
Pattern 2: API Response Handling
Type-Safe HTTP Response Processing
Modern APIs return varying response types based on status codes. Prisms provide elegant error handling and recovery strategies.
The Challenge
// Traditional approach: error-prone branching
if (response.status() == 200) {
return processSuccess((SuccessResponse) response);
} else if (response.status() == 400) {
ValidationError err = (ValidationError) response;
return handleValidation(err);
} else if (response.status() == 500) {
return handleServerError((ServerError) response);
} else if (response.status() == 429) {
return retryWithBackoff((RateLimitError) response);
}
// What about 401, 403, 404, ...?
The Prism Solution
@GeneratePrisms
@GenerateLenses
sealed interface ApiResponse permits Success, ValidationError, ServerError,
RateLimitError, AuthError, NotFoundError {}
record Success(JsonValue data, int statusCode) implements ApiResponse {}
record ValidationError(List<String> errors, String field) implements ApiResponse {}
record ServerError(String message, String traceId) implements ApiResponse {}
record RateLimitError(long retryAfterMs) implements ApiResponse {}
record AuthError(String realm) implements ApiResponse {}
record NotFoundError(String resource) implements ApiResponse {}
public class ApiHandler {
// Reusable prisms for each response type
private static final Prism<ApiResponse, Success> SUCCESS =
ApiResponsePrisms.success();
private static final Prism<ApiResponse, ValidationError> VALIDATION =
ApiResponsePrisms.validationError();
private static final Prism<ApiResponse, RateLimitError> RATE_LIMIT =
ApiResponsePrisms.rateLimitError();
private static final Prism<ApiResponse, ServerError> SERVER_ERROR =
ApiResponsePrisms.serverError();
public Either<String, JsonValue> handleResponse(ApiResponse response) {
// Try success first
return SUCCESS.mapOptional(Success::data, response)
.map(Either::<String, JsonValue>right)
// Then validation errors
.or(() -> VALIDATION.mapOptional(
err -> Either.<String, JsonValue>left(
"Validation failed: " + String.join(", ", err.errors())
),
response
))
// Then server errors
.or(() -> SERVER_ERROR.mapOptional(
err -> Either.<String, JsonValue>left(
"Server error: " + err.message() + " [" + err.traceId() + "]"
),
response
))
.orElse(Either.left("Unknown error type"));
}
public boolean isRetryable(ApiResponse response) {
return RATE_LIMIT.matches(response) || SERVER_ERROR.matches(response);
}
public Optional<Long> getRetryDelay(ApiResponse response) {
return RATE_LIMIT.mapOptional(RateLimitError::retryAfterMs, response);
}
}
Advanced: Response Pipeline with Fallbacks
public class ResilientApiClient {
public CompletableFuture<JsonValue> fetchWithFallbacks(String endpoint) {
return primaryApi.call(endpoint)
.thenCompose(response ->
SUCCESS.mapOptional(Success::data, response)
.map(CompletableFuture::completedFuture)
.or(() -> RATE_LIMIT.mapOptional(
err -> CompletableFuture.supplyAsync(
() -> callSecondaryApi(endpoint),
delayedExecutor(err.retryAfterMs(), TimeUnit.MILLISECONDS)
),
response
))
.orElseGet(() -> CompletableFuture.failedFuture(
new ApiException("Unrecoverable error")
))
);
}
}
When using prisms for API handling:
- Log unmatched cases: Track responses that don't match any prism
- Metrics: Count matches per prism type for monitoring
- Circuit breakers: Integrate retry logic with circuit breaker patterns
- Structured logging: Use mapOptional() to extract error details
Pattern 3: Data Validation Pipelines
Composable, Type-Safe Validation Logic
Validation often requires checking different data types and applying conditional rules. Prisms make validation logic declarative and reusable.
The Challenge
ETL pipelines process heterogeneous data where validation rules depend on data types:
// Traditional approach: imperative branching
List<ValidationError> errors = new ArrayList<>();
for (Object value : row.values()) {
if (value instanceof String s) {
if (s.length() > MAX_STRING_LENGTH) {
errors.add(new ValidationError("String too long: " + s));
}
} else if (value instanceof Integer i) {
if (i < 0) {
errors.add(new ValidationError("Negative integer: " + i));
}
}
// ... more type checks
}
The Prism Solution
@GeneratePrisms
sealed interface DataValue permits StringData, IntData, DoubleData, NullData {}
record StringData(String value) implements DataValue {}
record IntData(int value) implements DataValue {}
record DoubleData(double value) implements DataValue {}
record NullData() implements DataValue {}
public class ValidationPipeline {
// Validation rules as prism transformations
private static final Prism<DataValue, StringData> STRING =
DataValuePrisms.stringData();
private static final Prism<DataValue, IntData> INT =
DataValuePrisms.intData();
public static List<String> validate(List<DataValue> row) {
return row.stream()
.flatMap(value -> Stream.concat(
// Validate strings: emit an error message only when too long
STRING.getOptional(value)
.filter(s -> s.value().length() > MAX_STRING_LENGTH)
.map(s -> "String too long: " + s.value())
.stream(),
// Validate integers: emit an error message only when negative
INT.getOptional(value)
.filter(i -> i.value() < 0)
.map(i -> "Negative integer: " + i.value())
.stream()
))
.collect(Collectors.toList());
}
// Sanitise data by modifying only invalid values
public static List<DataValue> sanitise(List<DataValue> row) {
return row.stream()
.map(value ->
// Truncate long strings
STRING.modifyWhen(
s -> s.value().length() > MAX_STRING_LENGTH,
s -> new StringData(s.value().substring(0, MAX_STRING_LENGTH)),
value
)
)
.map(value ->
// Clamp negative integers to zero
INT.modifyWhen(
i -> i.value() < 0,
i -> new IntData(0),
value
)
)
.collect(Collectors.toList());
}
}
Advanced: Validation with Accumulation
Using Either and prisms for validation that accumulates errors:
public class AccumulatingValidator {
public static Either<List<String>, List<DataValue>> validateAll(List<DataValue> row) {
List<String> errors = new ArrayList<>();
List<DataValue> sanitised = new ArrayList<>();
for (DataValue value : row) {
// Validate and potentially sanitise each value
DataValue processed = value;
// Check strings
processed = STRING.modifyWhen(
s -> s.value().length() > MAX_STRING_LENGTH,
s -> {
errors.add("Truncated: " + s.value());
return new StringData(s.value().substring(0, MAX_STRING_LENGTH));
},
processed
);
// Check integers
processed = INT.modifyWhen(
i -> i.value() < 0,
i -> {
errors.add("Clamped negative: " + i.value());
return new IntData(0);
},
processed
);
sanitised.add(processed);
}
return errors.isEmpty()
? Either.right(sanitised)
: Either.left(errors);
}
}
- Compose validators: Build complex validation from simple prism rules
- Use modifyWhen() for sanitisation: Fix values whilst tracking changes
- Accumulate errors: Don't fail-fast; collect all validation issues
- Type-specific rules: Let prisms dispatch to appropriate validators
Pattern 4: Event Processing
Type-Safe Event Routing and Handling
Event-driven systems receive heterogeneous event types that require different processing logic. Prisms provide type-safe routing without instanceof cascades.
The Challenge
// Traditional approach: brittle event dispatching
public void handleEvent(Event event) {
if (event instanceof UserCreated uc) {
sendWelcomeEmail(uc.userId(), uc.email());
provisionResources(uc.userId());
} else if (event instanceof UserDeleted ud) {
cleanupResources(ud.userId());
archiveData(ud.userId());
} else if (event instanceof OrderPlaced op) {
processPayment(op.orderId());
updateInventory(op.items());
}
// Grows with each new event type
}
The Prism Solution
@GeneratePrisms
@GenerateLenses
sealed interface DomainEvent permits UserCreated, UserDeleted, UserUpdated,
OrderPlaced, OrderCancelled, PaymentProcessed {}
record UserCreated(String userId, String email, Instant timestamp) implements DomainEvent {}
record UserDeleted(String userId, Instant timestamp) implements DomainEvent {}
record UserUpdated(String userId, Map<String, String> changes, Instant timestamp) implements DomainEvent {}
record OrderPlaced(String orderId, List<LineItem> items, Instant timestamp) implements DomainEvent {}
record OrderCancelled(String orderId, String reason, Instant timestamp) implements DomainEvent {}
record PaymentProcessed(String orderId, double amount, Instant timestamp) implements DomainEvent {}
public class EventRouter {
private static final Prism<DomainEvent, UserCreated> USER_CREATED =
DomainEventPrisms.userCreated();
private static final Prism<DomainEvent, UserDeleted> USER_DELETED =
DomainEventPrisms.userDeleted();
private static final Prism<DomainEvent, OrderPlaced> ORDER_PLACED =
DomainEventPrisms.orderPlaced();
// Declarative event handler registry
private final Map<Prism<DomainEvent, ?>, Consumer<DomainEvent>> handlers =
Map.<Prism<DomainEvent, ?>, Consumer<DomainEvent>>of( // explicit type witness so the lambdas infer
USER_CREATED, event -> USER_CREATED.mapOptional(
uc -> {
sendWelcomeEmail(uc.userId(), uc.email());
provisionResources(uc.userId());
return uc;
},
event
),
USER_DELETED, event -> USER_DELETED.mapOptional(
ud -> {
cleanupResources(ud.userId());
archiveData(ud.userId());
return ud;
},
event
),
ORDER_PLACED, event -> ORDER_PLACED.mapOptional(
op -> {
processPayment(op.orderId());
updateInventory(op.items());
return op;
},
event
)
);
public void route(DomainEvent event) {
handlers.entrySet().stream()
.filter(entry -> entry.getKey().matches(event))
.findFirst()
.ifPresentOrElse(
entry -> entry.getValue().accept(event),
() -> log.warn("Unhandled event type: {}", event.getClass())
);
}
}
Advanced: Event Filtering and Transformation
public class EventProcessor {
// Process only recent user events
public List<DomainEvent> getRecentUserEvents(
List<DomainEvent> events,
Instant since
) {
Prism<DomainEvent, UserCreated> userCreated = USER_CREATED;
Prism<DomainEvent, UserDeleted> userDeleted = USER_DELETED;
return events.stream()
.filter(e ->
// Match user events, then apply the timestamp filter
userCreated.getOptional(e)
.filter(uc -> uc.timestamp().isAfter(since))
.isPresent()
||
userDeleted.getOptional(e)
.filter(ud -> ud.timestamp().isAfter(since))
.isPresent()
)
.collect(Collectors.toList());
}
// Transform events for audit log
public List<AuditEntry> toAuditLog(List<DomainEvent> events) {
return events.stream()
.flatMap(event ->
// Extract audit entries from different event types
USER_CREATED.mapOptional(
uc -> new AuditEntry("USER_CREATED", uc.userId(), uc.timestamp()),
event
).or(() ->
ORDER_PLACED.mapOptional(
op -> new AuditEntry("ORDER_PLACED", op.orderId(), op.timestamp()),
event
)
).stream()
)
.collect(Collectors.toList());
}
}
- Registry pattern: Map prisms to handlers for extensibility
- Metrics: Track event types processed using matches()
- Dead letter queue: Log events that match no prism
- Event sourcing: Use prisms to replay specific event types
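The dead-letter bullet can be sketched as a router wrapper that parks any event no registered prism matches, rather than dropping it silently. The predicates below stand in for prism.matches(), and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class DeadLetterRouter {
    public sealed interface DomainEvent permits UserCreated, OrderPlaced, PaymentProcessed {}
    public record UserCreated(String userId) implements DomainEvent {}
    public record OrderPlaced(String orderId) implements DomainEvent {}
    public record PaymentProcessed(String orderId) implements DomainEvent {}

    // Predicates standing in for prism.matches(event); only two types are handled
    static final List<Predicate<DomainEvent>> HANDLED =
        List.<Predicate<DomainEvent>>of(
            e -> e instanceof UserCreated,
            e -> e instanceof OrderPlaced);

    public static final List<DomainEvent> DEAD_LETTERS = new ArrayList<>();

    public static void route(DomainEvent event) {
        boolean handled = HANDLED.stream().anyMatch(p -> p.test(event));
        if (!handled) {
            DEAD_LETTERS.add(event); // park unmatched events for later inspection
        }
    }

    public static void main(String[] args) {
        route(new UserCreated("u1"));
        route(new PaymentProcessed("o9")); // no handler registered
        System.out.println(DEAD_LETTERS);  // only the PaymentProcessed event
    }
}
```

In a real system DEAD_LETTERS would be a queue or topic; the point is that "no prism matched" becomes an observable event instead of a silent fall-through.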
Pattern 5: State Machines
Type-Safe State Transitions
State machines with complex transition rules benefit from prisms' ability to safely match states and transform between them.
The Challenge
// Traditional approach: verbose state management
public Order transition(Order order, OrderEvent event) {
if (order.state() instanceof Pending && event instanceof PaymentReceived) {
return order.withState(new Processing(((PaymentReceived) event).transactionId()));
} else if (order.state() instanceof Processing && event instanceof ShippingCompleted) {
return order.withState(new Shipped(((ShippingCompleted) event).trackingNumber()));
}
// Many more transitions...
throw new IllegalStateException("Invalid transition");
}
The Prism Solution
@GeneratePrisms
sealed interface OrderState permits Pending, Processing, Shipped, Delivered, Cancelled {}
record Pending(Instant createdAt) implements OrderState {}
record Processing(String transactionId, Instant startedAt) implements OrderState {}
record Shipped(String trackingNumber, Instant shippedAt) implements OrderState {}
record Delivered(Instant deliveredAt) implements OrderState {}
record Cancelled(String reason, Instant cancelledAt) implements OrderState {}
@GeneratePrisms
sealed interface OrderEvent permits PaymentReceived, ShippingCompleted,
DeliveryConfirmed, CancellationRequested {}
record PaymentReceived(String transactionId) implements OrderEvent {}
record ShippingCompleted(String trackingNumber) implements OrderEvent {}
record DeliveryConfirmed() implements OrderEvent {}
record CancellationRequested(String reason) implements OrderEvent {}
public class OrderStateMachine {
private static final Prism<OrderState, Pending> PENDING =
OrderStatePrisms.pending();
private static final Prism<OrderState, Processing> PROCESSING =
OrderStatePrisms.processing();
private static final Prism<OrderState, Shipped> SHIPPED =
OrderStatePrisms.shipped();
private static final Prism<OrderEvent, PaymentReceived> PAYMENT =
OrderEventPrisms.paymentReceived();
private static final Prism<OrderEvent, ShippingCompleted> SHIPPING =
OrderEventPrisms.shippingCompleted();
private static final Prism<OrderEvent, DeliveryConfirmed> DELIVERY =
OrderEventPrisms.deliveryConfirmed();
// Define valid transitions as prism combinations
public Optional<OrderState> transition(OrderState currentState, OrderEvent event) {
// Pending -> Processing (on payment)
if (PENDING.matches(currentState) && PAYMENT.matches(event)) {
return PAYMENT.mapOptional(
payment -> new Processing(payment.transactionId(), Instant.now()),
event
);
}
// Processing -> Shipped (on shipping)
if (PROCESSING.matches(currentState) && SHIPPING.matches(event)) {
return SHIPPING.mapOptional(
shipping -> new Shipped(shipping.trackingNumber(), Instant.now()),
event
);
}
// Shipped -> Delivered (on confirmation)
if (SHIPPED.matches(currentState) && DELIVERY.matches(event)) {
return Optional.of(new Delivered(Instant.now()));
}
return Optional.empty(); // Invalid transition
}
// Guard conditions using prisms
public boolean canCancel(OrderState state) {
// Can cancel if Pending or Processing
return PENDING.matches(state) || PROCESSING.matches(state);
}
// Extract state-specific data
public Optional<String> getTrackingNumber(OrderState state) {
return SHIPPED.mapOptional(Shipped::trackingNumber, state);
}
}
Advanced: Transition Table
import org.higherkindedj.optics.util.Pair; // Pair utility from hkj-optics
public class AdvancedStateMachine {
// Define transitions as a declarative table
// PENDING, PAYMENT, etc. are the prism constants defined in OrderStateMachine
private static final Map<
Pair<Prism<OrderState, ?>, Prism<OrderEvent, ?>>,
BiFunction<OrderState, OrderEvent, OrderState>
> TRANSITIONS = Map.<Pair<Prism<OrderState, ?>, Prism<OrderEvent, ?>>,
BiFunction<OrderState, OrderEvent, OrderState>>of( // explicit witness so the lambdas infer
Pair.of(PENDING, PAYMENT),
(state, event) -> PAYMENT.mapOptional(
p -> (OrderState) new Processing(p.transactionId(), Instant.now()),
event
).orElse(state),
Pair.of(PROCESSING, SHIPPING),
(state, event) -> SHIPPING.mapOptional(
s -> (OrderState) new Shipped(s.trackingNumber(), Instant.now()),
event
).orElse(state)
);
public OrderState process(OrderState state, OrderEvent event) {
return TRANSITIONS.entrySet().stream()
.filter(entry ->
entry.getKey().first().matches(state) &&
entry.getKey().second().matches(event)
)
.findFirst()
.map(entry -> entry.getValue().apply(state, event))
.orElseThrow(() -> new IllegalStateException(
"Invalid transition: " + state + " -> " + event
));
}
}
- Exhaustive matching: Ensure all valid transitions are covered
- Guard conditions: Use matches() for pre-condition checks
- Immutability: States are immutable; transitions create new instances
- Audit trail: Log state transitions using prism metadata
Pattern 6: Plugin Systems
Type-Safe Plugin Discovery and Execution
Plugin architectures require dynamic dispatch to various plugin types whilst maintaining type safety.
The Challenge
// Traditional approach: reflection and casting
public void executePlugin(Plugin plugin, Object context) {
if (plugin.getClass().getName().equals("DatabasePlugin")) {
((DatabasePlugin) plugin).execute((DatabaseContext) context);
} else if (plugin.getClass().getName().equals("FileSystemPlugin")) {
((FileSystemPlugin) plugin).execute((FileSystemContext) context);
}
// Fragile and unsafe
}
The Prism Solution
@GeneratePrisms
sealed interface Plugin permits DatabasePlugin, FileSystemPlugin,
NetworkPlugin, ComputePlugin {}
record DatabasePlugin(String query, DatabaseConfig config) implements Plugin {
public Result execute(DatabaseContext ctx) {
return ctx.executeQuery(query, config);
}
}
record FileSystemPlugin(Path path, FileOperation operation) implements Plugin {
public Result execute(FileSystemContext ctx) {
return ctx.performOperation(path, operation);
}
}
record NetworkPlugin(URL endpoint, HttpMethod method) implements Plugin {
public Result execute(NetworkContext ctx) {
return ctx.makeRequest(endpoint, method);
}
}
record ComputePlugin(String script, Runtime runtime) implements Plugin {
public Result execute(ComputeContext ctx) {
return ctx.runScript(script, runtime);
}
}
public class PluginExecutor {
private static final Prism<Plugin, DatabasePlugin> DB =
PluginPrisms.databasePlugin();
private static final Prism<Plugin, FileSystemPlugin> FS =
PluginPrisms.fileSystemPlugin();
private static final Prism<Plugin, NetworkPlugin> NET =
PluginPrisms.networkPlugin();
private static final Prism<Plugin, ComputePlugin> COMPUTE =
PluginPrisms.computePlugin();
public Either<String, Result> execute(
Plugin plugin,
ExecutionContext context
) {
// Type-safe dispatch to appropriate handler
return DB.mapOptional(
dbPlugin -> context.getDatabaseContext()
.map(dbPlugin::execute)
.map(Either::<String, Result>right)
.orElse(Either.left("Database context not available")),
plugin
).or(() ->
FS.mapOptional(
fsPlugin -> context.getFileSystemContext()
.map(fsPlugin::execute)
.map(Either::<String, Result>right)
.orElse(Either.left("FileSystem context not available")),
plugin
)
).or(() ->
NET.mapOptional(
netPlugin -> context.getNetworkContext()
.map(netPlugin::execute)
.map(Either::<String, Result>right)
.orElse(Either.left("Network context not available")),
plugin
)
).or(() ->
COMPUTE.mapOptional(
computePlugin -> context.getComputeContext()
.map(computePlugin::execute)
.map(Either::<String, Result>right)
.orElse(Either.left("Compute context not available")),
plugin
)
).orElse(Either.left("Unknown plugin type"));
}
// Validate plugin before execution
public List<String> validate(Plugin plugin) {
List<String> errors = new ArrayList<>();
DB.mapOptional(p -> {
if (p.query().isEmpty()) {
errors.add("Database query cannot be empty");
}
return p;
}, plugin);
FS.mapOptional(p -> {
if (!Files.exists(p.path())) {
errors.add("File path does not exist: " + p.path());
}
return p;
}, plugin);
return errors;
}
}
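For comparison, modern Java's sealed interfaces with pattern-matching `switch` already give compiler-checked, exhaustive dispatch; prisms build on that foundation by turning each case into a first-class, composable value. The sketch below is a self-contained, simplified stand-in (the `Plugin` variants and `execute` logic are hypothetical, not the library's types) showing the baseline guarantee that prism-based dispatch extends.

```java
// Self-contained sketch: exhaustive, type-safe dispatch with a sealed
// hierarchy and pattern-matching switch (Java 21). Simplified stand-in
// types, not the library's Plugin model.
public class SealedDispatchSketch {
    sealed interface Plugin permits DbPlugin, FsPlugin {}
    record DbPlugin(String query) implements Plugin {}
    record FsPlugin(String path) implements Plugin {}

    // The compiler rejects this switch if a permitted subtype is missing.
    static String execute(Plugin plugin) {
        return switch (plugin) {
            case DbPlugin db -> "db:" + db.query();
            case FsPlugin fs -> "fs:" + fs.path();
        };
    }

    public static void main(String[] args) {
        assert execute(new DbPlugin("SELECT 1")).equals("db:SELECT 1");
        assert execute(new FsPlugin("/tmp")).equals("fs:/tmp");
        System.out.println("ok");
    }
}
```

Unlike a `switch`, a prism is a value: it can be stored in a constant, passed to generic helpers such as `extractAll`, and composed with other optics.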
Advanced: Plugin Composition
public class CompositePlugin {
private static final Prism<Plugin, DatabasePlugin> DB =
PluginPrisms.databasePlugin();
// Combine multiple plugins into a pipeline
public static Plugin pipeline(List<Plugin> plugins) {
return new CompositePluginImpl(plugins);
}
// Filter plugins by type for batch operations
public static List<DatabasePlugin> getAllDatabasePlugins(List<Plugin> plugins) {
Prism<Plugin, DatabasePlugin> dbPrism = DB;
return plugins.stream()
.flatMap(p -> dbPrism.getOptional(p).stream())
.collect(Collectors.toList());
}
// Transform plugins based on environment
public static List<Plugin> adaptForEnvironment(
List<Plugin> plugins,
Environment env
) {
return plugins.stream()
.map(plugin ->
// Modify database plugins for different environments
DB.modifyWhen(
db -> env == Environment.PRODUCTION,
db -> new DatabasePlugin(
db.query(),
db.config().withReadReplica()
),
plugin
)
)
.collect(Collectors.toList());
}
}
- Capability detection: Use matches() to check plugin capabilities
- Fail-safe execution: Always handle unmatched plugin types
- Plugin validation: Use prisms to validate configuration before execution
- Metrics: Track plugin execution by type using prism-based routing
Performance Optimisation Patterns
Caching Composed Prisms
public class OptimisedPrismCache {
// Cache expensive optic compositions
private static final Map<String, Object> OPTIC_CACHE =
new ConcurrentHashMap<>();
@SuppressWarnings("unchecked")
public static <T> T getCached(
String key,
Supplier<T> factory
) {
return (T) OPTIC_CACHE.computeIfAbsent(key, k -> factory.get());
}
// Example usage: caching a composed traversal
private static final Traversal<Config, String> DATABASE_HOST =
getCached("config.database.host", () ->
ConfigLenses.database()
.asTraversal()
.andThen(Prisms.some().asTraversal())
.andThen(Prisms.right().asTraversal())
.andThen(DatabaseSettingsLenses.host().asTraversal())
);
}
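The caching pattern here is plain `ConcurrentHashMap.computeIfAbsent`, independent of any optics machinery. This library-free sketch demonstrates the key property the cache relies on: the factory runs at most once per key, so an expensive optic composition is only built the first time it is requested.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Library-free sketch of the caching pattern used by OptimisedPrismCache:
// computeIfAbsent guarantees the factory runs once per key.
public class OpticCacheSketch {
    private static final Map<String, Object> CACHE = new ConcurrentHashMap<>();
    static final AtomicInteger FACTORY_CALLS = new AtomicInteger();

    @SuppressWarnings("unchecked")
    static <T> T getCached(String key, Supplier<T> factory) {
        return (T) CACHE.computeIfAbsent(key, k -> factory.get());
    }

    public static void main(String[] args) {
        Supplier<String> expensive = () -> {
            FACTORY_CALLS.incrementAndGet(); // stands in for building a composed optic
            return "composed-optic";
        };
        String first = getCached("config.host", expensive);
        String second = getCached("config.host", expensive);
        assert first.equals(second);
        assert FACTORY_CALLS.get() == 1; // factory ran exactly once
        System.out.println("ok");
    }
}
```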
Bulk Operations with Prisms
public class BulkProcessor {
// Process multiple items efficiently
public static <S, A> List<A> extractAll(
Prism<S, A> prism,
List<S> items
) {
return items.stream()
.flatMap(item -> prism.getOptional(item).stream())
.collect(Collectors.toList());
}
// Partition items by prism match
public static <S, A> Map<Boolean, List<S>> partitionByMatch(
Prism<S, A> prism,
List<S> items
) {
return items.stream()
.collect(Collectors.partitioningBy(prism::matches));
}
}
Testing Strategies
Testing Prism-Based Logic
public class PrismTestPatterns {
@Test
void testPrismMatching() {
Prism<ApiResponse, Success> success = ApiResponsePrisms.success();
ApiResponse successResponse = new Success(jsonData, 200);
ApiResponse errorResponse = new ServerError("Error", "trace123");
// Verify matching behaviour
assertTrue(success.matches(successResponse));
assertFalse(success.matches(errorResponse));
// Verify extraction
assertThat(success.getOptional(successResponse))
.isPresent()
.get()
.extracting(Success::statusCode)
.isEqualTo(200);
}
@Test
void testComposedPrisms() {
// Test deep prism compositions
Prism<Config, String> hostPrism = buildHostPrism();
Config validConfig = createValidConfig();
Config invalidConfig = createInvalidConfig();
assertThat(hostPrism.getOptional(validConfig)).isPresent();
assertThat(hostPrism.getOptional(invalidConfig)).isEmpty();
}
@Test
void testConditionalOperations() {
Prism<ConfigValue, IntValue> intPrism = ConfigValuePrisms.intValue();
ConfigValue value = new IntValue(42);
// Test modifyWhen
ConfigValue result = intPrism.modifyWhen(
i -> i.value() > 0,
i -> new IntValue(i.value() * 2),
value
);
assertThat(intPrism.getOptional(result))
.isPresent()
.get()
.extracting(IntValue::value)
.isEqualTo(84);
}
}
Further Reading
For deeper understanding of optics theory and applications:
- Profunctor Optics: Modular Data Accessors - Academic foundations
- Lens in Scala (Monocle) - Scala implementation and patterns
- Haskell Lens Library - Canonical reference
- A Little Lens Starter Tutorial - Beginner-friendly introduction
Previous: Prisms: A Practical Guide Next: Isomorphisms: Data Equivalence
Isomorphisms: A Practical Guide
Data Equivalence with Isos
- How to define lossless, reversible conversions between equivalent types
- Creating isomorphisms with Iso.of(get, reverseGet)
- Using reverse() to flip conversion directions
- Step-by-step transformation workflows for data format conversion
- Testing round-trip properties to ensure conversion correctness
- When to use isos vs direct conversion methods vs manual adapters
In the previous guides, we explored two essential optics: the Lens, for targeting data that must exist (a "has-a" relationship), and the Prism, for safely targeting data that might exist in a specific shape (an "is-a" relationship).
This leaves one final, fundamental question: what if you have two data types that are different in structure but hold the exact same information? How do you switch between them losslessly? For this, we need our final core optic: the Iso.
The Scenario: Translating Between Equivalent Types
An Iso (Isomorphism) is a "two-way street." It's an optic that represents a perfectly reversible, lossless conversion between two equivalent types. Think of it as a universal translator 🔄 or a type-safe adapter that you can compose with other optics.
An Iso is the right tool when you need to:
- Convert a wrapper type to its raw value (e.g., UserId(long id) <-> long).
- Handle data encoding and decoding (e.g., byte[] <-> Base64 String).
- Bridge two data structures that are informationally identical (e.g., a custom record and a generic tuple).
Let's explore that last case. Imagine we have a Point record and want to convert it to a generic Tuple2 to use with a library that operates on tuples.
The Data Model:
public record Point(int x, int y) {}
public record Tuple2<A, B>(A _1, B _2) {}
These two records can hold the same information. An Iso is the perfect way to formalise this relationship.
Think of Isos Like...
- A universal translator: Perfect two-way conversion between equivalent representations
- A reversible adapter: Converts between formats without losing information
- A bridge: Connects two different structures that represent the same data
- A currency exchange: Converts between equivalent values at a 1:1 rate
A Step-by-Step Walkthrough
Step 1: Defining an Iso
Unlike Lenses and Prisms, which are often generated from annotations, Isos are almost always defined manually. This is because the logic for converting between two types is unique to your specific domain.
You create an Iso using the static Iso.of(get, reverseGet) constructor.
import org.higherkindedj.optics.Iso;
import org.higherkindedj.hkt.tuple.Tuple;
import org.higherkindedj.hkt.tuple.Tuple2;
public class Converters {
public static Iso<Point, Tuple2<Integer, Integer>> pointToTuple() {
return Iso.of(
// Function to get the Tuple from the Point
point -> Tuple.of(point.x(), point.y()),
// Function to get the Point from the Tuple
tuple -> new Point(tuple._1(), tuple._2())
);
}
}
Step 2: The Core Iso Operations
An Iso provides two fundamental, lossless operations:
- get(source): The "forward" conversion (e.g., from Point to Tuple2).
- reverseGet(target): The "backward" conversion (e.g., from Tuple2 back to Point).
Furthermore, every Iso is trivially reversible using the .reverse() method, which returns a new Iso with the "get" and "reverseGet" functions swapped.
var pointToTupleIso = Converters.pointToTuple();
var myPoint = new Point(10, 20);
// Forward conversion
Tuple2<Integer, Integer> myTuple = pointToTupleIso.get(myPoint); // -> Tuple2[10, 20]
// Backward conversion using the reversed Iso
Point convertedBack = pointToTupleIso.reverse().get(myTuple); // -> Point[10, 20]
// Demonstrate perfect round-trip
assert myPoint.equals(convertedBack); // Always true for lawful Isos
Step 3: Composing Isos as a Bridge
The most powerful feature of an Iso is its ability to act as an adapter or "glue" between other optics. Because the conversion is lossless, an Iso preserves the "shape" of the optic it's composed with.
- Iso + Iso = Iso
- Iso + Lens = Lens
- Iso + Prism = Prism
- Iso + Traversal = Traversal
This second rule is incredibly useful. We can compose our Iso<Point, Tuple2> with a Lens that operates on a Tuple2 to create a brand new Lens that operates directly on our Point!
// A standard Lens that gets the first element of any Tuple2
Lens<Tuple2<Integer, Integer>, Integer> tupleFirstElementLens = ...;
// The composition: Iso<Point, Tuple2> + Lens<Tuple2, Integer> = Lens<Point, Integer>
Lens<Point, Integer> pointToX = pointToTupleIso.andThen(tupleFirstElementLens);
// We can now use this new Lens to modify the 'x' coordinate of our Point
Point movedPoint = pointToX.modify(x -> x + 5, myPoint); // -> Point[15, 20]
The Iso acted as a bridge, allowing a generic Lens for tuples to work on our specific Point record.
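To see the bridging mechanics concretely without the library, here is a minimal, self-contained sketch. The `Iso` and `Lens` records below are simplified stand-ins (hypothetical, not Higher-Kinded-J's actual classes), but they implement the same `Iso + Lens = Lens` rule: the composed lens reads through the forward conversion and writes back through the reverse one.

```java
import java.util.function.BiFunction;
import java.util.function.Function;

// Minimal stand-ins for Iso and Lens, illustrating Iso + Lens = Lens.
// Simplified sketch; the library's real types differ.
public class IsoBridgeSketch {
    record Iso<S, A>(Function<S, A> get, Function<A, S> reverseGet) {
        Iso<A, S> reverse() { return new Iso<>(reverseGet, get); }
        // Composing with a Lens on A yields a Lens on S.
        <B> Lens<S, B> andThen(Lens<A, B> lens) {
            return new Lens<>(
                s -> lens.get().apply(get.apply(s)),
                (s, b) -> reverseGet.apply(lens.set().apply(get.apply(s), b)));
        }
    }
    record Lens<S, A>(Function<S, A> get, BiFunction<S, A, S> set) {
        S modify(Function<A, A> f, S s) { return set.apply(s, f.apply(get.apply(s))); }
    }

    record Point(int x, int y) {}
    record Pair(int first, int second) {}

    static final Iso<Point, Pair> POINT_TO_PAIR = new Iso<>(
        p -> new Pair(p.x(), p.y()),
        t -> new Point(t.first(), t.second()));

    static final Lens<Pair, Integer> FIRST =
        new Lens<>(Pair::first, (t, v) -> new Pair(v, t.second()));

    public static void main(String[] args) {
        Point p = new Point(10, 20);
        // Round-trip through the Iso is lossless.
        assert p.equals(POINT_TO_PAIR.reverse().get().apply(POINT_TO_PAIR.get().apply(p)));
        // The bridged lens modifies x directly on Point.
        Lens<Point, Integer> pointX = POINT_TO_PAIR.andThen(FIRST);
        assert pointX.modify(x -> x + 5, p).equals(new Point(15, 20));
        System.out.println("ok");
    }
}
```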
When to Use Isos vs Other Approaches
Use Isos When:
- Data format conversion - Converting between equivalent representations
- Legacy system integration - Bridging old and new data formats
- Library interoperability - Adapting your types to work with external libraries
- Composable adapters - Building reusable conversion components
// Perfect for format conversion
Iso<LocalDate, String> dateStringIso = Iso.of(
date -> date.format(DateTimeFormatter.ISO_LOCAL_DATE),
dateStr -> LocalDate.parse(dateStr, DateTimeFormatter.ISO_LOCAL_DATE)
);
// Use with any date-focused lens
Lens<Person, String> birthDateStringLens =
PersonLenses.birthDate().andThen(dateStringIso);
Use Direct Conversion Methods When:
- One-way conversion - You don't need the reverse operation
- Non-lossless conversion - Information is lost in the conversion
- Performance critical paths - Minimal abstraction overhead needed
// Simple one-way conversion
String pointDescription = point.x() + "," + point.y();
Use Manual Adapters When:
- Complex conversion logic - Multi-step or conditional conversions
- Validation required - Conversion might fail
- Side effects needed - Logging, caching, etc.
// Complex conversion that might fail
public Optional<Point> parsePoint(String input) {
try {
String[] parts = input.split(",");
return Optional.of(new Point(
Integer.parseInt(parts[0].trim()),
Integer.parseInt(parts[1].trim())
));
} catch (Exception e) {
return Optional.empty();
}
}
Common Pitfalls
❌ Don't Do This:
// Lossy conversion - not a true isomorphism
Iso<Double, Integer> lossyIso = Iso.of(
d -> d.intValue(), // Loses decimal precision!
i -> i.doubleValue() // Can't recover original value
);
// One-way thinking - forgetting about reverseGet
Iso<Point, String> badPointIso = Iso.of(
point -> point.x() + "," + point.y(),
str -> new Point(0, 0) // Ignores the input!
);
// Creating Isos repeatedly instead of reusing
var iso1 = Iso.of(Point::x, x -> new Point(x, 0));
var iso2 = Iso.of(Point::x, x -> new Point(x, 0));
var iso3 = Iso.of(Point::x, x -> new Point(x, 0));
✅ Do This Instead:
// True isomorphism - perfect round-trip
Iso<Point, String> goodPointIso = Iso.of(
point -> point.x() + "," + point.y(),
str -> {
String[] parts = str.split(",");
return new Point(Integer.parseInt(parts[0]), Integer.parseInt(parts[1]));
}
);
// Test your isomorphisms
public static <A, B> void testIsomorphism(Iso<A, B> iso, A original) {
B converted = iso.get(original);
A roundTrip = iso.reverse().get(converted);
assert original.equals(roundTrip) : "Iso failed round-trip test";
}
// Reuse Isos as constants
public static final Iso<Point, Tuple2<Integer, Integer>> POINT_TO_TUPLE =
Iso.of(
point -> Tuple.of(point.x(), point.y()),
tuple -> new Point(tuple._1(), tuple._2())
);
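The "true isomorphism" above can be checked as plain functions, independent of the optics machinery. This library-free sketch restates the Point-to-String conversion and verifies the round-trip law in both directions.

```java
// The Point <-> String conversion above, as two plain functions with a
// round-trip check. Library-free sketch of the law a lawful Iso must satisfy.
public class PointStringRoundTrip {
    record Point(int x, int y) {}

    static String toStr(Point p) {
        return p.x() + "," + p.y();
    }

    static Point fromStr(String s) {
        String[] parts = s.split(",");
        return new Point(Integer.parseInt(parts[0]), Integer.parseInt(parts[1]));
    }

    public static void main(String[] args) {
        Point original = new Point(10, 20);
        assert toStr(original).equals("10,20");
        assert fromStr(toStr(original)).equals(original); // get then reverseGet
        assert toStr(fromStr("3,4")).equals("3,4");       // reverseGet then get
        System.out.println("ok");
    }
}
```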
Performance Notes
Isos are designed for efficient, lossless conversion:
- Zero overhead composition: Multiple Iso compositions are fused into single operations
- Lazy evaluation: Conversions only happen when needed
- Type safety: All conversions are checked at compile time
- Reusable: Isos can be stored and reused across your application
Best Practice: For frequently used conversions, create Isos as constants and test them:
public class DataIsos {
public static final Iso<UserId, Long> USER_ID_LONG =
Iso.of(UserId::value, UserId::new);
public static final Iso<Money, BigDecimal> MONEY_DECIMAL =
Iso.of(Money::amount, Money::new);
// Test your isos
static {
testIsomorphism(USER_ID_LONG, new UserId(12345L));
testIsomorphism(MONEY_DECIMAL, new Money(new BigDecimal("99.99")));
}
private static <A, B> void testIsomorphism(Iso<A, B> iso, A original) {
B converted = iso.get(original);
A roundTrip = iso.reverse().get(converted);
if (!original.equals(roundTrip)) {
throw new AssertionError("Iso failed round-trip test: " + original + " -> " + converted + " -> " + roundTrip);
}
}
}
Real-World Examples
1. API Data Transformation
// Internal model
public record Customer(String name, String email, LocalDate birthDate) {}
// External API model
public record CustomerDto(String fullName, String emailAddress, String birthDateString) {}
public class CustomerIsos {
public static final Iso<Customer, CustomerDto> CUSTOMER_DTO = Iso.of(
// Convert to DTO
customer -> new CustomerDto(
customer.name(),
customer.email(),
customer.birthDate().format(DateTimeFormatter.ISO_LOCAL_DATE)
),
// Convert from DTO
dto -> new Customer(
dto.fullName(),
dto.emailAddress(),
LocalDate.parse(dto.birthDateString(), DateTimeFormatter.ISO_LOCAL_DATE)
)
);
// Now any Customer lens can work with DTOs
public static final Lens<CustomerDto, String> DTO_NAME =
CUSTOMER_DTO.reverse().andThen(CustomerLenses.name());
}
2. Configuration Format Conversion
// Different configuration representations
public record DatabaseConfig(String host, int port, String database) {}
public record ConnectionString(String value) {}
public class ConfigIsos {
public static final Iso<DatabaseConfig, ConnectionString> DB_CONNECTION = Iso.of(
// To connection string
config -> new ConnectionString(
"jdbc:postgresql://" + config.host() + ":" + config.port() + "/" + config.database()
),
// From connection string
conn -> {
// Simple parser for this example
String url = conn.value();
String[] parts = url.replace("jdbc:postgresql://", "").split("[:/]");
return new DatabaseConfig(parts[0], Integer.parseInt(parts[1]), parts[2]);
}
);
// Use with existing configuration lenses
public static final Lens<DatabaseConfig, String> CONNECTION_STRING_HOST =
DB_CONNECTION.andThen(
Lens.of(
cs -> cs.value().split("//")[1].split(":")[0],
(cs, host) -> new ConnectionString(cs.value().replaceFirst("//[^:]+:", "//" + host + ":"))
)
);
}
3. Wrapper Type Integration
// Strongly-typed wrappers
public record ProductId(UUID value) {}
public record CategoryId(UUID value) {}
public class WrapperIsos {
public static final Iso<ProductId, UUID> PRODUCT_ID_UUID =
Iso.of(ProductId::value, ProductId::new);
public static final Iso<CategoryId, UUID> CATEGORY_ID_UUID =
Iso.of(CategoryId::value, CategoryId::new);
// Use with any UUID-based operations
public static String formatProductId(ProductId id) {
return PRODUCT_ID_UUID
.andThen(Iso.of(UUID::toString, UUID::fromString))
.get(id);
}
}
Complete, Runnable Example
This example puts all the steps together to show both direct conversion and composition.
public class IsoUsageExample {
@GenerateLenses
public record Point(int x, int y) {}
@GenerateLenses
public record Circle(Point centre, int radius) {}
public static class Converters {
@GenerateIsos
public static Iso<Point, Tuple2<Integer, Integer>> pointToTuple() {
return Iso.of(
point -> Tuple.of(point.x(), point.y()),
tuple -> new Point(tuple._1(), tuple._2()));
}
// Additional useful Isos
public static final Iso<Point, String> POINT_STRING = Iso.of(
point -> point.x() + "," + point.y(),
str -> {
String[] parts = str.split(",");
return new Point(Integer.parseInt(parts[0]), Integer.parseInt(parts[1]));
}
);
}
// Test helper
private static <A, B> void testRoundTrip(Iso<A, B> iso, A original, String description) {
B converted = iso.get(original);
A roundTrip = iso.reverse().get(converted);
System.out.println(description + ":");
System.out.println(" Original: " + original);
System.out.println(" Converted: " + converted);
System.out.println(" Round-trip: " + roundTrip);
System.out.println(" Success: " + original.equals(roundTrip));
System.out.println();
}
public static void main(String[] args) {
// 1. Define a point and circle.
var myPoint = new Point(10, 20);
var myCircle = new Circle(myPoint, 5);
System.out.println("=== ISO USAGE EXAMPLE ===");
System.out.println("Original Point: " + myPoint);
System.out.println("Original Circle: " + myCircle);
System.out.println("------------------------------------------");
// 2. Get the generated Iso.
var pointToTupleIso = ConvertersIsos.pointToTuple;
// --- SCENARIO 1: Direct conversions and round-trip testing ---
System.out.println("--- Scenario 1: Direct Conversions ---");
testRoundTrip(pointToTupleIso, myPoint, "Point to Tuple conversion");
testRoundTrip(Converters.POINT_STRING, myPoint, "Point to String conversion");
// --- SCENARIO 2: Using reverse() ---
System.out.println("--- Scenario 2: Reverse Operations ---");
var tupleToPointIso = pointToTupleIso.reverse();
var myTuple = Tuple.of(30, 40);
Point pointFromTuple = tupleToPointIso.get(myTuple);
System.out.println("Tuple: " + myTuple + " -> Point: " + pointFromTuple);
System.out.println();
// --- SCENARIO 3: Composition with lenses ---
System.out.println("--- Scenario 3: Composition with Lenses ---");
// Create a lens manually that works with Point directly
Lens<Point, Integer> pointToXLens = Lens.of(
Point::x,
(point, newX) -> new Point(newX, point.y())
);
// Use the lens
Point movedPoint = pointToXLens.modify(x -> x + 5, myPoint);
System.out.println("Original point: " + myPoint);
System.out.println("After moving X by 5: " + movedPoint);
System.out.println();
// --- SCENARIO 4: Demonstrating Iso composition ---
System.out.println("--- Scenario 4: Iso Composition ---");
// Show how the Iso can be used to convert and work with tuples
Tuple2<Integer, Integer> tupleRepresentation = pointToTupleIso.get(myPoint);
System.out.println("Point as tuple: " + tupleRepresentation);
// Modify the tuple using tuple operations
Lens<Tuple2<Integer, Integer>, Integer> tupleFirstLens = Tuple2Lenses._1();
Tuple2<Integer, Integer> modifiedTuple = tupleFirstLens.modify(x -> x * 2, tupleRepresentation);
// Convert back to Point
Point modifiedPoint = pointToTupleIso.reverse().get(modifiedTuple);
System.out.println("Modified tuple: " + modifiedTuple);
System.out.println("Back to point: " + modifiedPoint);
System.out.println();
// --- SCENARIO 5: String format conversions ---
System.out.println("--- Scenario 5: String Format Conversions ---");
String pointAsString = Converters.POINT_STRING.get(myPoint);
System.out.println("Point as string: " + pointAsString);
Point recoveredFromString = Converters.POINT_STRING.reverse().get(pointAsString);
System.out.println("Recovered from string: " + recoveredFromString);
System.out.println("Perfect round-trip: " + myPoint.equals(recoveredFromString));
// --- SCENARIO 6: Working with Circle centre through Iso ---
System.out.println("--- Scenario 6: Circle Centre Manipulation ---");
// Get the centre as a tuple, modify it, and put it back
Point originalCentre = myCircle.centre();
Tuple2<Integer, Integer> centreAsTuple = pointToTupleIso.get(originalCentre);
Tuple2<Integer, Integer> shiftedCentre = Tuple.of(centreAsTuple._1() + 10, centreAsTuple._2() + 10);
Point newCentre = pointToTupleIso.reverse().get(shiftedCentre);
Circle newCircle = CircleLenses.centre().set(newCentre, myCircle);
System.out.println("Original circle: " + myCircle);
System.out.println("Centre as tuple: " + centreAsTuple);
System.out.println("Shifted centre tuple: " + shiftedCentre);
System.out.println("New circle: " + newCircle);
}
}
Expected Output:
=== ISO USAGE EXAMPLE ===
Original Point: Point[x=10, y=20]
Original Circle: Circle[centre=Point[x=10, y=20], radius=5]
------------------------------------------
--- Scenario 1: Direct Conversions ---
Point to Tuple conversion:
Original: Point[x=10, y=20]
Converted: Tuple2[_1=10, _2=20]
Round-trip: Point[x=10, y=20]
Success: true
Point to String conversion:
Original: Point[x=10, y=20]
Converted: 10,20
Round-trip: Point[x=10, y=20]
Success: true
--- Scenario 2: Reverse Operations ---
Tuple: Tuple2[_1=30, _2=40] -> Point: Point[x=30, y=40]
--- Scenario 3: Composition with Lenses ---
Original point: Point[x=10, y=20]
After moving X by 5: Point[x=15, y=20]
--- Scenario 4: Iso Composition ---
Point as tuple: Tuple2[_1=10, _2=20]
Modified tuple: Tuple2[_1=20, _2=20]
Back to point: Point[x=20, y=20]
--- Scenario 5: String Format Conversions ---
Point as string: 10,20
Recovered from string: Point[x=10, y=20]
Perfect round-trip: true
--- Scenario 6: Circle Centre Manipulation ---
Original circle: Circle[centre=Point[x=10, y=20], radius=5]
Centre as tuple: Tuple2[_1=10, _2=20]
Shifted centre tuple: Tuple2[_1=20, _2=30]
New circle: Circle[centre=Point[x=20, y=30], radius=5]
Why Isos are a Powerful Bridge
Lens, Prism, and Iso form a powerful trio for modelling any data operation. An Iso is the essential bridge that enables you to:
- Work with the Best Representation: Convert data to the most suitable format for each operation, then convert back when needed.
- Enable Library Integration: Adapt your internal data types to work seamlessly with external libraries without changing your core domain model.
- Maintain Type Safety: All conversions are checked at compile time, eliminating runtime conversion errors.
- Build Reusable Converters: Create tested, reusable conversion components that can be used throughout your application.
The step-by-step conversion approach shown in the examples is the most practical way to use Isos in real applications, providing clear, maintainable code that leverages the strengths of different data representations.
Previous: Advanced Prism Patterns Next: Traversals: Handling Bulk Updates
Traversals: Practical Guide
Handling Bulk Updates
- How to perform bulk operations on collections within immutable structures
- Using @GenerateTraversals for automatic collection optics
- Composing traversals with lenses and prisms for deep bulk updates
- The Traversals.modify() and Traversals.getAll() utility methods
- Understanding zero-or-more target semantics
- When to use traversals vs streams vs manual loops for collection processing
So far, our journey through optics has shown us how to handle singular focus:
- A Lens targets a part that must exist.
- A Prism targets a part that might exist in one specific shape.
- An Iso provides a two-way bridge between equivalent types.
But what about operating on many items at once? How do we apply a single change to every element in a nested list? For this, we need the most general and powerful optic in our toolkit: the Traversal.
The Scenario: Updating an Entire League 🗺️
A Traversal is a functional "search-and-replace." It gives you a single tool to focus on zero or more items within a larger structure, allowing you to get, set, or modify all of them in one go.
This makes it the perfect optic for working with collections. Consider this data model of a sports league:
The Data Model:
public record Player(String name, int score) {}
public record Team(String name, List<Player> players) {}
public record League(String name, List<Team> teams) {}
Our Goal: We need to give every single player in the entire league 5 bonus points. The traditional approach involves nested loops or streams, forcing us to manually reconstruct each immutable object along the way.
// Manual, verbose bulk update
List<Team> newTeams = league.teams().stream()
.map(team -> {
List<Player> newPlayers = team.players().stream()
.map(player -> new Player(player.name(), player.score() + 5))
.collect(Collectors.toList());
return new Team(team.name(), newPlayers);
})
.collect(Collectors.toList());
League updatedLeague = new League(league.name(), newTeams);
This code is deeply nested and mixes the what (add 5 to a score) with the how (looping, collecting, and reconstructing). A Traversal lets us abstract away the "how" completely.
Think of Traversals Like...
- A spotlight: Illuminates many targets at once within a structure
- A search-and-replace tool: Finds all matching items and transforms them
- A bulk editor: Applies the same operation to multiple items efficiently
- A magnifying glass array: Like a lens, but for zero-to-many targets instead of exactly one
A Step-by-Step Walkthrough
Step 1: Generating Traversals
The library provides a rich set of tools for creating Traversal instances, found in the Traversals utility class and through annotations.
- @GenerateTraversals: Annotating a record will automatically generate a Traversal for any Iterable field (like List or Set).
- Traversals.forList(): A static helper that creates a traversal for the elements of a List.
- Traversals.forMap(key): A static helper that creates a traversal focusing on the value for a specific key in a Map.
import org.higherkindedj.optics.annotations.GenerateTraversals;
import java.util.List;
// We also add @GenerateLenses to get access to player fields
@GenerateLenses
public record Player(String name, int score) {}
@GenerateLenses
@GenerateTraversals // Traversal for List<Player>
public record Team(String name, List<Player> players) {}
@GenerateLenses
@GenerateTraversals // Traversal for List<Team>
public record League(String name, List<Team> teams) {}
Step 2: Composing a Deep Traversal
Just like other optics, Traversals can be composed with andThen. We can chain them together to create a single, deep traversal from the League all the way down to each player's score.
// Get generated optics
Traversal<League, Team> leagueToTeams = LeagueTraversals.teams();
Traversal<Team, Player> teamToPlayers = TeamTraversals.players();
Lens<Player, Integer> playerToScore = PlayerLenses.score();
// Compose them to create a single, deep traversal.
Traversal<League, Integer> leagueToAllPlayerScores =
leagueToTeams
.andThen(teamToPlayers)
.andThen(playerToScore.asTraversal()); // Convert the final Lens
The result is a single Traversal<League, Integer> that declaratively represents the path to all player scores.
Step 3: Using the Traversal with Helper Methods
The Traversals utility class provides convenient helper methods to perform the most common operations.
Traversals.modify(traversal, function, source): Applies a pure function to all targets of a traversal.
// Use the composed traversal to add 5 bonus points to every score.
League updatedLeague = Traversals.modify(leagueToAllPlayerScores, score -> score + 5, league);
Traversals.getAll(traversal, source): Extracts all targets of a traversal into aList.
// Get a flat list of all player scores in the league.
List<Integer> allScores = Traversals.getAll(leagueToAllPlayerScores, league);
// Result: [100, 90, 110, 120]
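For intuition, here is what the composed traversal abstracts away, written out as a library-free, self-contained sketch: `getAll` is a stream flatten, and `modify` is the nested rebuild of the immutable structure that the manual example at the start of this guide showed.

```java
import java.util.List;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;

// Library-free sketch of Traversal<League, Integer> semantics:
// getAll flattens every score; modify rebuilds the immutable structure,
// leaving the original untouched.
public class LeagueTraversalSketch {
    record Player(String name, int score) {}
    record Team(String name, List<Player> players) {}
    record League(String name, List<Team> teams) {}

    static List<Integer> getAllScores(League league) {
        return league.teams().stream()
            .flatMap(t -> t.players().stream())
            .map(Player::score)
            .collect(Collectors.toList());
    }

    static League modifyScores(League league, UnaryOperator<Integer> f) {
        return new League(league.name(), league.teams().stream()
            .map(t -> new Team(t.name(), t.players().stream()
                .map(p -> new Player(p.name(), f.apply(p.score())))
                .collect(Collectors.toList())))
            .collect(Collectors.toList()));
    }

    public static void main(String[] args) {
        League league = new League("Premier", List.of(
            new Team("A", List.of(new Player("Ann", 100), new Player("Bob", 90))),
            new Team("B", List.of(new Player("Cid", 110)))));
        assert getAllScores(league).equals(List.of(100, 90, 110));
        League updated = modifyScores(league, s -> s + 5);
        assert getAllScores(updated).equals(List.of(105, 95, 115));
        assert getAllScores(league).equals(List.of(100, 90, 110)); // original unchanged
        System.out.println("ok");
    }
}
```

The traversal version expresses the same operation as a single reusable value, so the looping and reconstruction code above never has to be written per use-site.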
When to Use Traversals vs Other Approaches
Use Traversals When:
- Bulk operations on nested collections - Applying the same operation to many items
- Type-safe collection manipulation - Working with collections inside immutable structures
- Reusable bulk logic - Creating operations that can be applied across different instances
- Effectful operations - Using modifyF for operations that might fail or have side effects
// Perfect for bulk updates with type safety
Traversal<Company, String> allEmails = CompanyTraversals.employees()
.andThen(EmployeeTraversals.contacts())
.andThen(ContactLenses.email().asTraversal());
Company withNormalisedEmails = Traversals.modify(allEmails, String::toLowerCase, company);
Use Streams When:
- Complex transformations - Multiple operations that don't map cleanly to traversals
- Filtering and collecting - You need to change the collection structure
- Performance critical paths - Minimal abstraction overhead needed
// Better with streams for complex logic
List<String> activePlayerNames = league.teams().stream()
.flatMap(team -> team.players().stream())
.filter(player -> player.score() > 50)
.map(Player::name)
.sorted()
.collect(toList());
Use Manual Loops When:
- Early termination needed - You might want to stop processing early
- Complex control flow - Multiple conditions and branches
- Imperative mindset - The operation is inherently procedural
// Sometimes a loop is clearest
for (Team team : league.teams()) {
for (Player player : team.players()) {
if (player.score() < 0) {
throw new IllegalStateException("Negative score found: " + player);
}
}
}
Common Pitfalls
❌ Don't Do This:
// Inefficient: Creating traversals repeatedly
teams.forEach(team -> {
var traversal = TeamTraversals.players().andThen(PlayerLenses.score().asTraversal());
Traversals.modify(traversal, score -> score + 1, team);
});
// Over-engineering: Using traversals for simple cases
Traversal<Player, String> playerName = PlayerLenses.name().asTraversal();
String name = Traversals.getAll(playerName, player).get(0); // Just use player.name()!
// Type confusion: Forgetting that traversals work on zero-or-more targets
League emptyLeague = new League("Empty", List.of());
List<Integer> scores = Traversals.getAll(leagueToAllPlayerScores, emptyLeague); // Returns empty list
✅ Do This Instead:
// Efficient: Create traversals once, use many times
var scoreTraversal = LeagueTraversals.teams()
.andThen(TeamTraversals.players())
.andThen(PlayerLenses.score().asTraversal());
League bonusLeague = Traversals.modify(scoreTraversal, score -> score + 5, league);
League doubledLeague = Traversals.modify(scoreTraversal, score -> score * 2, league);
// Right tool for the job: Use direct access for single items
String playerName = player.name(); // Simple and clear
// Defensive: Handle empty collections gracefully
List<Integer> allScores = Traversals.getAll(scoreTraversal, league);
OptionalDouble average = allScores.stream().mapToInt(Integer::intValue).average();
Performance Notes
Traversals are optimised for immutable updates:
- Memory efficient: Only creates new objects along the path that changes
- Lazy evaluation: Stops early if no changes are needed
- Batch operations: modifyF processes all targets in a single pass
- Structural sharing: Unchanged parts of the data structure are reused
Best Practice: For frequently used traversal combinations, create them once and store as constants:
public class LeagueOptics {
public static final Traversal<League, Integer> ALL_PLAYER_SCORES =
LeagueTraversals.teams()
.andThen(TeamTraversals.players())
.andThen(PlayerLenses.score().asTraversal());
public static final Traversal<League, String> ALL_PLAYER_NAMES =
LeagueTraversals.teams()
.andThen(TeamTraversals.players())
.andThen(PlayerLenses.name().asTraversal());
}
Common Patterns
Validation with Error Accumulation
// Validate all email addresses across a company's employees
Traversal<Company, String> allEmails = CompanyTraversals.employees()
.andThen(EmployeeTraversals.contactInfo())
.andThen(ContactInfoLenses.email().asTraversal());
Function<String, Kind<ValidatedKind.Witness<List<String>>, String>> validateEmail =
email -> email.contains("@")
? VALIDATED.widen(Validated.valid(email))
: VALIDATED.widen(Validated.invalid(List.of("Invalid email: " + email)));
Validated<List<String>, Company> result = VALIDATED.narrow(
allEmails.modifyF(validateEmail, company, validatedApplicative)
);
Conditional Updates
// Give bonus points only to high-performing players
Function<Integer, Integer> conditionalBonus = score ->
score >= 80 ? score + 10 : score;
League bonusLeague = Traversals.modify(
LeagueOptics.ALL_PLAYER_SCORES,
conditionalBonus,
league
);
Data Transformation
// Normalise all player names to title case
Function<String, String> titleCase = name ->
Arrays.stream(name.toLowerCase().split(" "))
.map(word -> word.substring(0, 1).toUpperCase() + word.substring(1))
.collect(joining(" "));
League normalisedLeague = Traversals.modify(
LeagueOptics.ALL_PLAYER_NAMES,
titleCase,
league
);
Asynchronous Operations
// Fetch updated scores asynchronously (the service computes a bonus from the current score)
Function<Integer, CompletableFuture<Integer>> fetchBonusPoints =
score -> statsService.getBonusPoints(score);
CompletableFuture<League> enrichedLeague = CF.narrow(
LeagueOptics.ALL_PLAYER_SCORES.modifyF(
score -> CF.widen(fetchBonusPoints.apply(score)),
league,
CompletableFutureMonad.INSTANCE
)
);
List Manipulation with partsOf
Treating Traversal Focuses as Collections
- Converting a Traversal into a Lens on a List of elements
- Converting a Traversal into a Lens on a List of elements
- Using partsOf for sorting, reversing, and deduplicating focused elements
- Convenience methods: sorted, reversed, distinct
- Understanding size mismatch behaviour and graceful degradation
- When list-level operations on traversal targets are appropriate
So far, we've seen how traversals excel at applying the same operation to every focused element individually. But what if you need to perform operations that consider all focuses as a group? Sorting, reversing, or removing duplicates are inherently list-level operations—they require knowledge of the entire collection, not just individual elements.
This is where partsOf becomes invaluable. It bridges the gap between element-wise traversal operations and collection-level algorithms.
Think of partsOf Like...
- A "collect and redistribute" operation: Gather all targets, transform them as a group, then put them back
- A camera taking a snapshot: Capture all focused elements, edit the photo, then overlay the changes
- A postal sorting centre: Collect all parcels, sort them efficiently, then redistribute to addresses
- The bridge between trees and lists: Temporarily flatten a structure for list operations, then restore the shape
The Problem: Element-Wise Limitations
Consider this scenario: you have a catalogue of products across multiple categories, and you want to sort all prices from lowest to highest. With standard traversal operations, you're stuck:
// This doesn't work - modify operates on each element independently
Traversal<Catalogue, Double> allPrices = CatalogueTraversals.categories()
.andThen(CategoryTraversals.products())
.andThen(ProductLenses.price().asTraversal());
// ❌ This sorts nothing - each price is transformed in isolation
Catalogue result = Traversals.modify(allPrices, price -> price, catalogue);
// Prices remain in original order!
The traversal has no way to "see" all prices simultaneously. Each element is processed independently, making sorting impossible.
The Solution: partsOf
The partsOf combinator transforms a Traversal<S, A> into a Lens<S, List<A>>, allowing you to:
- Get: Extract all focused elements as a single list
- Manipulate: Apply any list operation (sort, reverse, filter, etc.)
- Set: Distribute the modified elements back to their original positions
// Convert traversal to a lens on the list of all prices
Lens<Catalogue, List<Double>> pricesLens = Traversals.partsOf(allPrices);
// Get all prices as a list
List<Double> allPricesList = pricesLens.get(catalogue);
// Result: [999.99, 499.99, 799.99, 29.99, 49.99, 19.99]
// Sort the list
List<Double> sortedPrices = new ArrayList<>(allPricesList);
Collections.sort(sortedPrices);
// Result: [19.99, 29.99, 49.99, 499.99, 799.99, 999.99]
// Set the sorted prices back
Catalogue sortedCatalogue = pricesLens.set(sortedPrices, catalogue);
The Magic: The sorted prices are distributed back to the original positions in the structure. The first product gets the lowest price, the second product gets the second-lowest, and so on—regardless of which category they belong to.
Convenience Methods
The Traversals utility class provides convenience methods that combine partsOf with common list operations:
sorted - Natural Ordering
Traversal<List<Product>, Double> priceTraversal =
Traversals.<Product>forList().andThen(ProductLenses.price().asTraversal());
// Sort prices in ascending order
List<Product> sortedProducts = Traversals.sorted(priceTraversal, products);
sorted - Custom Comparator
Traversal<List<Product>, String> nameTraversal =
Traversals.<Product>forList().andThen(ProductLenses.name().asTraversal());
// Sort names case-insensitively
List<Product> sortedByName = Traversals.sorted(
nameTraversal,
String.CASE_INSENSITIVE_ORDER,
products
);
// Sort by name length
List<Product> sortedByLength = Traversals.sorted(
nameTraversal,
Comparator.comparingInt(String::length),
products
);
reversed - Invert Order
Traversal<Project, Integer> priorityTraversal =
ProjectTraversals.tasks().andThen(TaskLenses.priority().asTraversal());
// Reverse all priorities
Project reversedProject = Traversals.reversed(priorityTraversal, project);
// Useful for: inverting priority schemes, LIFO ordering, undo stacks
distinct - Remove Duplicates
Traversal<List<Product>, String> tagTraversal =
Traversals.<Product>forList().andThen(ProductLenses.tag().asTraversal());
// Remove duplicate tags (preserves first occurrence)
List<Product> deduplicatedProducts = Traversals.distinct(tagTraversal, products);
Understanding Size Mismatch Behaviour
A crucial aspect of partsOf is how it handles size mismatches between the new list and the number of target positions:
Fewer elements than positions: Original values are preserved in remaining positions.
// Original: 5 products with prices [100, 200, 300, 400, 500]
List<Double> partialPrices = List.of(10.0, 20.0, 30.0); // Only 3 values
List<Product> result = pricesLens.set(partialPrices, products);
// Result prices: [10.0, 20.0, 30.0, 400, 500]
// First 3 updated, last 2 unchanged
More elements than positions: Extra elements are ignored.
// Original: 3 products
List<Double> extraPrices = List.of(10.0, 20.0, 30.0, 40.0, 50.0); // 5 values
List<Product> result = pricesLens.set(extraPrices, products);
// Result: Only first 3 prices used, 40.0 and 50.0 ignored
This graceful degradation makes partsOf safe to use even when you're not certain about the exact number of targets.
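The mismatch rule reduces to a simple positional merge, sketched here in plain Java (names are illustrative, not the library's implementation):

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the size-mismatch rule: new values are consumed positionally,
// missing positions keep their originals, and surplus values are dropped.
public class SizeMismatchSketch {

    static List<Double> redistribute(List<Double> newValues, List<Double> originals) {
        List<Double> result = new ArrayList<>();
        for (int i = 0; i < originals.size(); i++) {
            result.add(i < newValues.size() ? newValues.get(i) : originals.get(i));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Double> originals = List.of(100.0, 200.0, 300.0, 400.0, 500.0);
        System.out.println(redistribute(List.of(10.0, 20.0, 30.0), originals));
        // [10.0, 20.0, 30.0, 400.0, 500.0] -- last two positions keep their originals
        System.out.println(redistribute(List.of(10.0, 20.0, 30.0, 40.0, 50.0), List.of(1.0, 2.0, 3.0)));
        // [10.0, 20.0, 30.0] -- 40.0 and 50.0 are ignored
    }
}
```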
Lens Laws Compliance
The partsOf combinator produces a lawful Lens when the list sizes match:
- Get-Set Law: set(get(s), s) = s ✓
- Set-Get Law: get(set(a, s)) = a ✓ (when a.size() matches the number of targets)
- Set-Set Law: set(b, set(a, s)) = set(b, s) ✓
When sizes don't match, the laws still hold for the elements that are provided.
Advanced Use Cases
Combining with Filtered Traversals
// Sort only in-stock product prices
Traversal<List<Product>, Double> inStockPrices =
Traversals.<Product>forList()
.filtered(p -> p.stockLevel() > 0)
.andThen(ProductLenses.price().asTraversal());
List<Product> result = Traversals.sorted(inStockPrices, products);
// Out-of-stock products unchanged, in-stock prices sorted
Custom List Algorithms
Lens<Catalogue, List<Double>> pricesLens = Traversals.partsOf(allPrices);
List<Double> prices = new ArrayList<>(pricesLens.get(catalogue));
// Apply any list algorithm:
Collections.shuffle(prices); // Randomise
Collections.rotate(prices, 3); // Circular rotation
prices.sort(Comparator.reverseOrder()); // Descending sort
prices.removeIf(p -> p < 10.0); // Filter (with caveats)
Performance Considerations
partsOf operations traverse the structure twice:
- Once for get: collect all focused elements
- Once for set: distribute modified elements back
For very large structures with thousands of focuses, consider:
- Caching the lens if used repeatedly
- Using direct stream operations if structure preservation isn't required
- Profiling to ensure the abstraction overhead is acceptable
Best Practice: Create the partsOf lens once and reuse it:
public class CatalogueOptics {
private static final Traversal<Catalogue, Double> ALL_PRICES =
CatalogueTraversals.categories()
.andThen(CategoryTraversals.products())
.andThen(ProductLenses.price().asTraversal());
public static final Lens<Catalogue, List<Double>> PRICES_AS_LIST =
Traversals.partsOf(ALL_PRICES);
}
Common Pitfalls with partsOf
❌ Don't Do This:
// Expecting distinct to reduce structure size
List<Product> products = List.of(
new Product("Widget", 25.99),
new Product("Gadget", 49.99),
new Product("Widget", 30.00) // Duplicate name
);
// This doesn't remove the third product!
List<Product> result = Traversals.distinct(nameTraversal, products);
// The new list of distinct names is shorter, so the third product keeps its original name.
// Wrong: Using partsOf when you need element-wise operations
Lens<List<Product>, List<Double>> lens = Traversals.partsOf(priceTraversal);
List<Double> prices = lens.get(products);
prices.forEach(p -> System.out.println(p)); // Just use Traversals.getAll()!
✅ Do This Instead:
// Understand that structure is preserved, only values redistribute
List<Product> result = Traversals.distinct(nameTraversal, products);
// Third product keeps original price, gets redistributed unique name
// Use partsOf when you need list-level operations
Lens<List<Product>, List<Double>> lens = Traversals.partsOf(priceTraversal);
List<Double> prices = new ArrayList<>(lens.get(products));
Collections.sort(prices); // True list operation
List<Product> resorted = lens.set(prices, products); // set returns the new structure
// For simple iteration, use getAll
Traversals.getAll(priceTraversal, products).forEach(System.out::println);
When to Use partsOf
Use partsOf when:
- Sorting focused elements by their values
- Reversing the order of focused elements
- Removing duplicates whilst preserving structure
- Applying list algorithms that require seeing all elements at once
- Redistributing values across positions (e.g., load balancing)
Avoid partsOf when:
- Simple iteration suffices (use getAll)
- Element-wise transformation is needed (use modify)
- You need to change the structure itself (use streams/filtering)
- Performance is critical and structure is very large
Real-World Example: Configuration Validation
// Configuration model
@GenerateLenses
@GenerateTraversals
public record ServerConfig(String name, List<DatabaseConfig> databases) {}
@GenerateLenses
public record DatabaseConfig(String host, int port, String name) {}
// Validation traversal
public class ConfigValidation {
private static final Traversal<ServerConfig, Integer> ALL_DB_PORTS =
ServerConfigTraversals.databases()
.andThen(DatabaseConfigLenses.port().asTraversal());
public static Validated<List<String>, ServerConfig> validateConfig(ServerConfig config) {
Function<Integer, Kind<ValidatedKind.Witness<List<String>>, Integer>> validatePort =
port -> {
if (port >= 1024 && port <= 65535) {
return VALIDATED.widen(Validated.valid(port));
} else {
return VALIDATED.widen(Validated.invalid(
List.of("Port " + port + " is out of valid range (1024-65535)")
));
}
};
return VALIDATED.narrow(
ALL_DB_PORTS.modifyF(
validatePort,
config,
ValidatedMonad.instance(Semigroups.list())
)
);
}
}
Complete, Runnable Example
This example demonstrates how to use the with* helpers for a targeted update and how to use a composed Traversal with the new utility methods for bulk operations.
package org.higherkindedj.example.optics;
import java.util.ArrayList;
import java.util.List;
import org.higherkindedj.optics.Lens;
import org.higherkindedj.optics.Traversal;
import org.higherkindedj.optics.annotations.GenerateLenses;
import org.higherkindedj.optics.annotations.GenerateTraversals;
import org.higherkindedj.optics.util.Traversals;
public class TraversalUsageExample {
@GenerateLenses
public record Player(String name, int score) {}
@GenerateLenses
@GenerateTraversals
public record Team(String name, List<Player> players) {}
@GenerateLenses
@GenerateTraversals
public record League(String name, List<Team> teams) {}
public static void main(String[] args) {
var team1 = new Team("Team Alpha", List.of(
new Player("Alice", 100),
new Player("Bob", 90)
));
var team2 = new Team("Team Bravo", List.of(
new Player("Charlie", 110),
new Player("Diana", 120)
));
var league = new League("Pro League", List.of(team1, team2));
System.out.println("=== TRAVERSAL USAGE EXAMPLE ===");
System.out.println("Original League: " + league);
System.out.println("------------------------------------------");
// --- SCENARIO 1: Using `with*` helpers for a targeted, shallow update ---
System.out.println("--- Scenario 1: Shallow Update with `with*` Helpers ---");
var teamToUpdate = league.teams().get(0);
var updatedTeam = TeamLenses.withName(teamToUpdate, "Team Omega");
var newTeamsList = new ArrayList<>(league.teams());
newTeamsList.set(0, updatedTeam);
var leagueWithUpdatedTeam = LeagueLenses.withTeams(league, newTeamsList);
System.out.println("After updating one team's name:");
System.out.println(leagueWithUpdatedTeam);
System.out.println("------------------------------------------");
// --- SCENARIO 2: Using composed Traversals for deep, bulk updates ---
System.out.println("--- Scenario 2: Bulk Updates with Composed Traversals ---");
// Create the composed traversal
Traversal<League, Integer> leagueToAllPlayerScores =
LeagueTraversals.teams()
.andThen(TeamTraversals.players())
.andThen(PlayerLenses.score().asTraversal());
// Use the `modify` helper to add 5 bonus points to every score.
League updatedLeague = Traversals.modify(leagueToAllPlayerScores, score -> score + 5, league);
System.out.println("After adding 5 bonus points to all players:");
System.out.println(updatedLeague);
System.out.println();
// --- SCENARIO 3: Extracting data with `getAll` ---
System.out.println("--- Scenario 3: Data Extraction ---");
List<Integer> allScores = Traversals.getAll(leagueToAllPlayerScores, league);
System.out.println("All player scores: " + allScores);
System.out.println("Total players: " + allScores.size());
System.out.println("Average score: " + allScores.stream().mapToInt(Integer::intValue).average().orElse(0.0));
System.out.println();
// --- SCENARIO 4: Conditional updates ---
System.out.println("--- Scenario 4: Conditional Updates ---");
// Give bonus points only to players with scores >= 100
League bonusLeague = Traversals.modify(
leagueToAllPlayerScores,
score -> score >= 100 ? score + 20 : score,
league
);
System.out.println("After conditional bonus (20 points for scores >= 100):");
System.out.println(bonusLeague);
System.out.println();
// --- SCENARIO 5: Multiple traversals ---
System.out.println("--- Scenario 5: Multiple Traversals ---");
// Create a traversal for player names
Traversal<League, String> leagueToAllPlayerNames =
LeagueTraversals.teams()
.andThen(TeamTraversals.players())
.andThen(PlayerLenses.name().asTraversal());
// Normalise all names to uppercase
League upperCaseLeague = Traversals.modify(leagueToAllPlayerNames, String::toUpperCase, league);
System.out.println("After converting all names to uppercase:");
System.out.println(upperCaseLeague);
System.out.println();
// --- SCENARIO 6: Working with empty collections ---
System.out.println("--- Scenario 6: Empty Collections ---");
League emptyLeague = new League("Empty League", List.of());
List<Integer> emptyScores = Traversals.getAll(leagueToAllPlayerScores, emptyLeague);
League emptyAfterUpdate = Traversals.modify(leagueToAllPlayerScores, score -> score + 100, emptyLeague);
System.out.println("Empty league: " + emptyLeague);
System.out.println("Scores from empty league: " + emptyScores);
System.out.println("Empty league after update: " + emptyAfterUpdate);
System.out.println("------------------------------------------");
System.out.println("Original league unchanged: " + league);
}
}
Expected Output:
=== TRAVERSAL USAGE EXAMPLE ===
Original League: League[name=Pro League, teams=[Team[name=Team Alpha, players=[Player[name=Alice, score=100], Player[name=Bob, score=90]]], Team[name=Team Bravo, players=[Player[name=Charlie, score=110], Player[name=Diana, score=120]]]]]
------------------------------------------
--- Scenario 1: Shallow Update with `with*` Helpers ---
After updating one team's name:
League[name=Pro League, teams=[Team[name=Team Omega, players=[Player[name=Alice, score=100], Player[name=Bob, score=90]]], Team[name=Team Bravo, players=[Player[name=Charlie, score=110], Player[name=Diana, score=120]]]]]
------------------------------------------
--- Scenario 2: Bulk Updates with Composed Traversals ---
After adding 5 bonus points to all players:
League[name=Pro League, teams=[Team[name=Team Alpha, players=[Player[name=Alice, score=105], Player[name=Bob, score=95]]], Team[name=Team Bravo, players=[Player[name=Charlie, score=115], Player[name=Diana, score=125]]]]]
--- Scenario 3: Data Extraction ---
All player scores: [100, 90, 110, 120]
Total players: 4
Average score: 105.0
--- Scenario 4: Conditional Updates ---
After conditional bonus (20 points for scores >= 100):
League[name=Pro League, teams=[Team[name=Team Alpha, players=[Player[name=Alice, score=120], Player[name=Bob, score=90]]], Team[name=Team Bravo, players=[Player[name=Charlie, score=130], Player[name=Diana, score=140]]]]]
--- Scenario 5: Multiple Traversals ---
After converting all names to uppercase:
League[name=Pro League, teams=[Team[name=Team Alpha, players=[Player[name=ALICE, score=100], Player[name=BOB, score=90]]], Team[name=Team Bravo, players=[Player[name=CHARLIE, score=110], Player[name=DIANA, score=120]]]]]
--- Scenario 6: Empty Collections ---
Empty league: League[name=Empty League, teams=[]]
Scores from empty league: []
Empty league after update: League[name=Empty League, teams=[]]
------------------------------------------
Original league unchanged: League[name=Pro League, teams=[Team[name=Team Alpha, players=[Player[name=Alice, score=100], Player[name=Bob, score=90]]], Team[name=Team Bravo, players=[Player[name=Charlie, score=110], Player[name=Diana, score=120]]]]]
Unifying the Concepts
A Traversal is the most general of the core optics. In fact, all other optics can be seen as specialised Traversals:
- A Lens is just a Traversal that always focuses on exactly one item.
- A Prism is just a Traversal that focuses on zero or one item.
- An Iso is just a Traversal that focuses on exactly one item and is reversible.
This is the reason they can all be composed together so seamlessly. By mastering Traversal, you complete your understanding of the core optics family, enabling you to build powerful, declarative, and safe data transformations that work efficiently across any number of targets.
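This hierarchy can be made concrete by looking only at the read side of each optic, i.e. "how many values does it focus on?". The plain-Java sketch below models that, with illustrative names that are not part of the Higher-Kinded-J API:

```java
import java.util.List;

// The read side of the hierarchy: a traversal yields zero-or-more focused values,
// a lens always yields exactly one, a prism yields zero or one.
public class OpticsHierarchySketch {

    record Player(String name, int score) {}

    // Lens-as-traversal: always exactly one focus.
    static List<Integer> lensGetAll(Player p) { return List.of(p.score()); }

    // Prism-as-traversal: zero or one focus (here, only positive scores match).
    static List<Integer> prismGetAll(Player p) {
        return p.score() > 0 ? List.of(p.score()) : List.of();
    }

    public static void main(String[] args) {
        System.out.println(lensGetAll(new Player("Alice", 100)).size());  // always 1
        System.out.println(prismGetAll(new Player("Bob", -5)).isEmpty()); // true -- no match
    }
}
```

Because all three shapes fit the same "zero-or-more focuses" contract, they compose with each other without any adapters beyond `asTraversal()`.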
Folds: A Practical Guide
Querying Immutable Data
- How to query and extract data from complex structures without modification
- Using @GenerateFolds to create type-safe query optics automatically
- Understanding the relationship between Fold and the Foldable type class
- Aggregating data with monoids for sums, products, and custom combiners
- Composing folds with other optics for deep, conditional queries
- The difference between getAll, preview, find, exists, all, and length
- Maybe-based extensions for functional optional handling (previewMaybe, findMaybe, getAllMaybe)
- When to use Fold vs Traversal vs direct field access vs Stream API
- Building read-only data processing pipelines with clear intent
In previous guides, we explored optics that allow both reading and writing: Lens for required fields, Prism for conditional variants, Iso for lossless conversions, and Traversal for bulk operations on collections.
But what if you need to perform read-only operations? What if you want to query, search, filter, or aggregate data without any possibility of modification? This is where Fold shines.
The Scenario: Analysing E-Commerce Orders
A Fold is a read-only optic designed specifically for querying and data extraction. Think of it as a database query tool 🔍 or a telescope 🔭 that lets you peer into your data structures, extract information, and aggregate results—all without the ability to modify anything.
Consider an e-commerce system where you need to analyse orders:
The Data Model:
@GenerateLenses
public record Product(String name, double price, String category, boolean inStock) {}
@GenerateLenses
@GenerateFolds // Generate Folds for querying
public record Order(String orderId, List<Product> items, String customerName) {}
@GenerateLenses
@GenerateFolds
public record OrderHistory(List<Order> orders) {}
Common Query Needs:
- "Find all products in this order"
- "Get the first product or empty if none"
- "Check if any product is out of stock"
- "Count how many items are in the order"
- "Calculate the total price of all items"
- "Check if all items are under £100"
A Fold makes these queries type-safe, composable, and expressive.
Think of Folds Like...
- A database query: Extracting specific data from complex structures
- A read-only telescope: Magnifying and examining data without touching it
- A search engine: Finding and collecting information efficiently
- An aggregation pipeline: Combining values according to rules (via monoids)
- A reporter: Summarising data into useful metrics
Fold vs Traversal: Understanding the Difference
Before we dive deeper, it's crucial to understand how Fold relates to Traversal:
| Aspect | Traversal | Fold |
|---|---|---|
| Purpose | Read and modify collections | Read-only queries |
| Can modify? | ✅ Yes (set, modify) | ❌ No |
| Query operations | ✅ Yes (via getAll, but not primary purpose) | ✅ Yes (designed for this) |
| Intent clarity | "I might modify this" | "I'm only reading this" |
| Conversion | Can be converted to Fold via asFold() | Cannot be converted to Traversal |
| Use cases | Bulk updates, validation with modifications | Queries, searches, aggregations |
Key Insight: Every Traversal can be viewed as a Fold (read-only subset), but not every Fold can be a Traversal. By choosing Fold when you only need reading, you make your code's intent clear and prevent accidental modifications.
A Step-by-Step Walkthrough
Step 1: Generating Folds
Just like with other optics, we use annotations to trigger automatic code generation. Annotating a record with @GenerateFolds creates a companion class (e.g., OrderFolds) containing a Fold for each field.
import org.higherkindedj.optics.annotations.GenerateFolds;
import org.higherkindedj.optics.annotations.GenerateLenses;
import java.util.List;
@GenerateLenses
public record Product(String name, double price, String category, boolean inStock) {}
@GenerateLenses
@GenerateFolds
public record Order(String orderId, List<Product> items, String customerName) {}
This generates:
- OrderFolds.items() → Fold<Order, Product> (focuses on all products)
- OrderFolds.orderId() → Fold<Order, String> (focuses on the order ID)
- OrderFolds.customerName() → Fold<Order, String> (focuses on customer name)
Step 2: The Core Fold Operations
A Fold<S, A> provides these essential query operations:
getAll(source): Extract All Focused Values
Returns a List<A> containing all the values the Fold focuses on.
Fold<Order, Product> itemsFold = OrderFolds.items();
Order order = new Order("ORD-123", List.of(
new Product("Laptop", 999.99, "Electronics", true),
new Product("Mouse", 25.00, "Electronics", true),
new Product("Desk", 350.00, "Furniture", false)
), "Alice");
List<Product> allProducts = itemsFold.getAll(order);
// Result: [Product[Laptop, 999.99, ...], Product[Mouse, 25.00, ...], Product[Desk, 350.00, ...]]
preview(source): Get the First Value
Returns an Optional<A> containing the first focused value, or Optional.empty() if none exist.
Optional<Product> firstProduct = itemsFold.preview(order);
// Result: Optional[Product[Laptop, 999.99, ...]]
Order emptyOrder = new Order("ORD-456", List.of(), "Bob");
Optional<Product> noProduct = itemsFold.preview(emptyOrder);
// Result: Optional.empty
find(predicate, source): Find First Matching Value
Returns an Optional<A> containing the first value that matches the predicate.
Optional<Product> expensiveProduct = itemsFold.find(
product -> product.price() > 500.00,
order
);
// Result: Optional[Product[Laptop, 999.99, ...]]
exists(predicate, source): Check If Any Match
Returns true if at least one focused value matches the predicate.
boolean hasOutOfStock = itemsFold.exists(
product -> !product.inStock(),
order
);
// Result: true (Desk is out of stock)
all(predicate, source): Check If All Match
Returns true if all focused values match the predicate (returns true for empty collections).
boolean allInStock = itemsFold.all(
product -> product.inStock(),
order
);
// Result: false (Desk is out of stock)
isEmpty(source): Check for Empty
Returns true if there are zero focused values.
boolean hasItems = !itemsFold.isEmpty(order);
// Result: true
length(source): Count Values
Returns the number of focused values as an int.
int itemCount = itemsFold.length(order);
// Result: 3
Step 2.5: Maybe-Based Fold Extensions
Higher-Kinded-J provides extension methods that integrate Fold with the Maybe type, offering a more functional approach to handling absent values compared to Java's Optional. These extensions are available via static imports from FoldExtensions.
The Challenge: Working with Nullable Values
Standard Fold operations use Optional<A> for operations that might not find a value (like preview and find). While Optional works well, functional programming often prefers Maybe because it:
- Integrates seamlessly with Higher-Kinded Types (HKT)
- Works consistently with other monadic operations (flatMap, map, fold)
- Provides better composition with validation and error handling types
- Offers a more principled functional API
Think of Maybe as Optional's more functional cousin—they both represent "a value or nothing", but Maybe plays more nicely with the rest of the functional toolkit.
Think of Maybe-Based Extensions Like...
- A search that returns "found" or "not found": Maybe explicitly models presence or absence
- A safe lookup in a dictionary: either you get the value wrapped in Just, or you get Nothing
- A nullable pointer that can't cause NPE: you must explicitly check before unwrapping
- Optional's functional sibling - Same concept, better integration with functional patterns
The Three Extension Methods
All three methods are static imports from org.higherkindedj.optics.extensions.FoldExtensions:
import static org.higherkindedj.optics.extensions.FoldExtensions.*;
1. previewMaybe(fold, source) - Get First Value as Maybe
The previewMaybe method is the Maybe-based equivalent of preview(). It returns the first focused value wrapped in Maybe, or Maybe.nothing() if none exist.
import org.higherkindedj.hkt.maybe.Maybe;
import static org.higherkindedj.optics.extensions.FoldExtensions.previewMaybe;
Fold<Order, Product> itemsFold = OrderFolds.items();
Order order = new Order("ORD-123", List.of(
new Product("Laptop", 999.99, "Electronics", true),
new Product("Mouse", 25.00, "Electronics", true)
), "Alice");
Maybe<Product> firstProduct = previewMaybe(itemsFold, order);
// Result: Just(Product[Laptop, 999.99, ...])
Order emptyOrder = new Order("ORD-456", List.of(), "Bob");
Maybe<Product> noProduct = previewMaybe(itemsFold, emptyOrder);
// Result: Nothing
When to use previewMaybe vs preview:
- Use previewMaybe when working in a functional pipeline with other Maybe values
- Use preview when interoperating with standard Java code expecting Optional
- Use previewMaybe when you need HKT compatibility for generic functional abstractions
2. findMaybe(fold, predicate, source) - Find First Match as Maybe
The findMaybe method is the Maybe-based equivalent of find(). It returns the first focused value matching the predicate, or Maybe.nothing() if no match is found.
import static org.higherkindedj.optics.extensions.FoldExtensions.findMaybe;
Fold<Order, Product> itemsFold = OrderFolds.items();
Maybe<Product> expensiveProduct = findMaybe(
itemsFold,
product -> product.price() > 500.00,
order
);
// Result: Just(Product[Laptop, 999.99, ...])
Maybe<Product> luxuryProduct = findMaybe(
itemsFold,
product -> product.price() > 5000.00,
order
);
// Result: Nothing
Common Use Cases:
- Product search: Find first available item matching criteria
- Validation: Locate the first invalid field in a form
- Configuration: Find the first matching configuration option
- Inventory: Locate first in-stock item in a category
3. getAllMaybe(fold, source) - Get All Values as Maybe-Wrapped List
The getAllMaybe method returns all focused values as Maybe<List<A>>. If the Fold finds at least one value, you get Just(List<A>). If it finds nothing, you get Nothing.
This is particularly useful when you want to distinguish between "found an empty collection" and "found no results".
import static org.higherkindedj.optics.extensions.FoldExtensions.getAllMaybe;
Fold<Order, Product> itemsFold = OrderFolds.items();
Maybe<List<Product>> allProducts = getAllMaybe(itemsFold, order);
// Result: Just([Product[Laptop, ...], Product[Mouse, ...]])
Order emptyOrder = new Order("ORD-456", List.of(), "Bob");
Maybe<List<Product>> noProducts = getAllMaybe(itemsFold, emptyOrder);
// Result: Nothing
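The contract is simple: "at least one focus" becomes `Just`, "no focuses" becomes `Nothing`. The stdlib sketch below models it with `Optional<List<A>>` standing in for `Maybe<List<A>>` (since `Maybe` is library-specific); the method name mirrors the docs but is not the library implementation.

```java
import java.util.List;
import java.util.Optional;

// A stdlib model of getAllMaybe's contract, with Optional<List<A>> standing in
// for Maybe<List<A>>: a non-empty result becomes "Just", an empty one "Nothing".
public class GetAllMaybeSketch {

    static <A> Optional<List<A>> getAllMaybe(List<A> focusedValues) {
        return focusedValues.isEmpty() ? Optional.empty() : Optional.of(focusedValues);
    }

    public static void main(String[] args) {
        System.out.println(getAllMaybe(List.of("Laptop", "Mouse"))); // Optional[[Laptop, Mouse]]
        System.out.println(getAllMaybe(List.of())); // Optional.empty -- emptiness is a distinct case
    }
}
```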
When to use getAllMaybe vs getAll:
| Scenario | Use getAll() | Use getAllMaybe() |
|---|---|---|
| You need the list regardless of emptiness | ✅ Returns List<A> (possibly empty) | ❌ Overkill |
| You want to treat empty results as a failure case | ❌ Must check isEmpty() manually | ✅ Returns Nothing for empty results |
| You're chaining functional operations with Maybe | ❌ Requires conversion | ✅ Directly composable |
| Performance-critical batch processing | ✅ Direct list access | ❌ Extra Maybe wrapping |
Real-World Scenario: Product Search with Maybe
Here's a practical example showing how Maybe-based extensions simplify null-safe querying:
import org.higherkindedj.optics.Fold;
import org.higherkindedj.optics.annotations.GenerateFolds;
import org.higherkindedj.hkt.maybe.Maybe;
import static org.higherkindedj.optics.extensions.FoldExtensions.*;
@GenerateFolds
public record ProductCatalog(List<Product> products) {}
public class ProductSearchService {
private static final Fold<ProductCatalog, Product> ALL_PRODUCTS =
ProductCatalogFolds.products();
// Find the cheapest in-stock product in a category
public Maybe<Product> findCheapestInCategory(
ProductCatalog catalog,
String category
) {
return getAllMaybe(ALL_PRODUCTS, catalog)
.map(products -> products.stream()
.filter(p -> category.equals(p.category()))
.filter(Product::inStock)
.min(Comparator.comparing(Product::price))
.orElse(null)
)
.flatMap(Maybe::fromNullable); // Convert null to Nothing
}
// Get first premium product (>£1000)
public Maybe<Product> findPremiumProduct(ProductCatalog catalog) {
return findMaybe(
ALL_PRODUCTS,
product -> product.price() > 1000.00,
catalog
);
}
// Check if any products are available
public boolean hasAvailableProducts(ProductCatalog catalog) {
return getAllMaybe(ALL_PRODUCTS, catalog)
.map(products -> products.stream().anyMatch(Product::inStock))
.getOrElse(false);
}
// Extract all product names (or empty message)
public String getProductSummary(ProductCatalog catalog) {
return getAllMaybe(ALL_PRODUCTS, catalog)
.map(products -> products.stream()
.map(Product::name)
.collect(Collectors.joining(", "))
)
.getOrElse("No products available");
}
}
Optional vs Maybe: A Comparison
Understanding when to use each type helps you make informed decisions:
| Aspect | Optional<A> | Maybe<A> |
|---|---|---|
| Purpose | Standard Java optional values | Functional optional values with HKT support |
| Package | java.util.Optional | org.higherkindedj.hkt.maybe.Maybe |
| HKT Support | ❌ No | ✅ Yes (integrates with Kind<F, A>) |
| Monadic Operations | Limited (map, flatMap, filter) | Full (map, flatMap, filter, fold, getOrElse, etc.) |
| Java Interop | ✅ Native support | ❌ Requires conversion |
| Functional Composition | Basic | ✅ Excellent (works with Applicative, Monad, etc.) |
| Pattern Matching | ifPresent(), orElse() | isJust(), isNothing(), fold() |
| Use Cases | Standard Java APIs, interop | Functional pipelines, HKT abstractions |
| Conversion | Maybe.fromOptional(opt) | maybe.toOptional() |
Best Practice: Use Optional at API boundaries (public methods, external libraries) and Maybe internally in functional pipelines.
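A minimal sketch of this boundary pattern. The Maybe stand-in below is a simplified illustration, not the real org.higherkindedj.hkt.maybe.Maybe API; it exists only to keep the example self-contained and runnable:

```java
import java.util.Optional;
import java.util.function.Function;

public class BoundaryPattern {
    // Tiny stand-in for Maybe (illustration only; the real type is richer)
    sealed interface Maybe<A> {
        static <A> Maybe<A> fromOptional(Optional<A> opt) {
            return opt.<Maybe<A>>map(Just::new).orElse(new Nothing<>());
        }
        default Optional<A> toOptional() {
            return this instanceof Just<A> j ? Optional.of(j.value()) : Optional.empty();
        }
        default <B> Maybe<B> map(Function<A, B> f) {
            return this instanceof Just<A> j ? new Just<>(f.apply(j.value())) : new Nothing<>();
        }
    }
    record Just<A>(A value) implements Maybe<A> {}
    record Nothing<A>() implements Maybe<A> {}

    // Internal pipeline works in Maybe...
    static Maybe<String> normaliseInternal(Maybe<String> name) {
        return name.map(String::trim).map(String::toUpperCase);
    }

    // ...while the public API converts at the boundary and exposes Optional
    public static Optional<String> normalise(Optional<String> name) {
        return normaliseInternal(Maybe.fromOptional(name)).toOptional();
    }

    public static void main(String[] args) {
        System.out.println(normalise(Optional.of("  alice "))); // Optional[ALICE]
        System.out.println(normalise(Optional.empty()));        // Optional.empty
    }
}
```

Callers see only standard `Optional`, while all intermediate steps stay composable in the functional pipeline.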
When to Use Each Extension Method
Here's a decision matrix to help you choose the right method:
Use previewMaybe when:
- You need the first value from a Fold
- You're working in a functional pipeline with other Maybe values
- You want to chain operations (map, flatMap, fold) on the result
- You need HKT compatibility
// Example: Get first expensive product and calculate discount
Maybe<Double> discountedPrice = previewMaybe(productsFold, order)
.filter(p -> p.price() > 100)
.map(p -> p.price() * 0.9);
Use findMaybe when:
- You need to locate a specific value matching a predicate
- You want to avoid the verbosity of getAll().stream().filter().findFirst()
- You're building search functionality
- You want to short-circuit on the first match (performance)
- You want to short-circuit on the first match (performance)
// Example: Find first out-of-stock item
Maybe<Product> outOfStock = findMaybe(
productsFold,
p -> !p.inStock(),
order
);
Use getAllMaybe when:
- You want to treat empty results as a "nothing" case
- You want to chain functional operations on the entire result set
- You're building batch processing pipelines
- You need to propagate "nothing found" through your computation
// Example: Process all products or provide default behaviour
String report = getAllMaybe(productsFold, order)
.map(products -> generateReport(products))
.getOrElse("No products to report");
Integration with Existing Fold Operations
Maybe-based extensions work seamlessly alongside standard Fold operations. You can mix and match based on your needs:
Fold<Order, Product> itemsFold = OrderFolds.items();
// Standard Fold operations
List<Product> allItems = itemsFold.getAll(order); // Always returns list
Optional<Product> firstOpt = itemsFold.preview(order); // Optional-based
int count = itemsFold.length(order); // Primitive int
// Maybe-based extensions
Maybe<Product> firstMaybe = previewMaybe(itemsFold, order); // Maybe-based
Maybe<Product> matchMaybe = findMaybe(itemsFold, p -> ..., order); // Maybe-based
Maybe<List<Product>> allMaybe = getAllMaybe(itemsFold, order); // Maybe-wrapped list
Conversion Between Optional and Maybe:
// Convert Optional to Maybe
Optional<Product> optional = itemsFold.preview(order);
Maybe<Product> maybe = Maybe.fromOptional(optional);
// Convert Maybe to Optional
Maybe<Product> maybe = previewMaybe(itemsFold, order);
Optional<Product> optional = maybe.toOptional();
Performance Considerations
Maybe-based extensions have minimal overhead:
- previewMaybe: Same performance as preview(); it simply wraps the result in Maybe instead of Optional
- findMaybe: Identical to find(); short-circuits on the first match
- getAllMaybe: Adds one extra Maybe wrapping over getAll(), a negligible cost
Optimisation Tip: For performance-critical code, prefer getAll() if you don't need the Maybe semantics. The extra wrapping and pattern matching add a small but measurable cost in tight loops.
Practical Example: Safe Navigation with Maybe
Combining getAllMaybe with composed folds creates powerful null-safe query pipelines:
import org.higherkindedj.optics.Fold;
import org.higherkindedj.hkt.maybe.Maybe;
import static org.higherkindedj.optics.extensions.FoldExtensions.*;
@GenerateFolds
public record OrderHistory(List<Order> orders) {}
public class OrderAnalytics {
private static final Fold<OrderHistory, Order> ORDERS =
OrderHistoryFolds.orders();
private static final Fold<Order, Product> PRODUCTS =
OrderFolds.items();
// Calculate total revenue, handling empty history gracefully
public double calculateRevenue(OrderHistory history) {
return getAllMaybe(ORDERS, history)
.flatMap(orders -> {
List<Double> prices = orders.stream()
.flatMap(order -> getAllMaybe(PRODUCTS, order)
.map(products -> products.stream().map(Product::price))
.getOrElse(Stream.empty()))
.toList();
return prices.isEmpty() ? Maybe.nothing() : Maybe.just(prices);
})
.map(prices -> prices.stream().mapToDouble(Double::doubleValue).sum())
.getOrElse(0.0);
}
// Find most expensive product across all orders
public Maybe<Product> findMostExpensive(OrderHistory history) {
return getAllMaybe(ORDERS, history)
.flatMap(orders -> {
List<Product> allProducts = orders.stream()
.flatMap(order -> getAllMaybe(PRODUCTS, order)
.map(List::stream)
.getOrElse(Stream.empty()))
.toList();
return allProducts.isEmpty()
? Maybe.nothing()
: Maybe.fromNullable(allProducts.stream()
.max(Comparator.comparing(Product::price))
.orElse(null));
});
}
}
See FoldExtensionsExample.java for a runnable demonstration of all Maybe-based Fold extensions.
Step 3: Composing Folds for Deep Queries
Folds can be composed with other optics to create deep query paths. When composing with Lens, Prism, or other Fold instances, use andThen().
// Get all product names from all orders in history
Fold<OrderHistory, Order> historyToOrders = OrderHistoryFolds.orders();
Fold<Order, Product> orderToProducts = OrderFolds.items();
Lens<Product, String> productToName = ProductLenses.name();
Fold<OrderHistory, String> historyToAllProductNames =
historyToOrders
.andThen(orderToProducts)
.andThen(productToName.asFold());
OrderHistory history = new OrderHistory(List.of(order1, order2, order3));
List<String> allProductNames = historyToAllProductNames.getAll(history);
// Result: ["Laptop", "Mouse", "Desk", "Keyboard", "Monitor", ...]
Step 4: Aggregation with foldMap and Monoids
The most powerful feature of Fold is its ability to aggregate data using monoids. This is where Fold truly shines for combining values in flexible, reusable ways.
Understanding Monoids: The Simple Explanation
Think of a monoid as a recipe for combining things. It needs two ingredients:
- A starting value (called empty) - like starting with 0 when adding numbers, or "" when joining strings
- A combining rule (called combine) - like "add these two numbers" or "concatenate these two strings"
Simple Examples:
- Adding numbers: Start with 0, combine by adding → 0 + 5 + 10 + 3 = 18
- Joining strings: Start with "", combine by concatenating → "" + "Hello" + " " + "World" = "Hello World"
- Finding maximum: Start with negative infinity, combine by taking the larger value
- Checking all conditions: Start with true, combine with AND (&&) → all must be true
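This recipe already exists in the JDK: Stream.reduce takes exactly these two ingredients, an identity value and a combining function, so the bullet points above can be checked directly in plain Java:

```java
import java.util.stream.Stream;

public class MonoidIntuition {
    public static void main(String[] args) {
        // Adding numbers: identity 0, combine with +
        int sum = Stream.of(5, 10, 3).reduce(0, Integer::sum);
        System.out.println(sum); // 18

        // Joining strings: identity "", combine with concatenation
        String joined = Stream.of("Hello", " ", "World").reduce("", String::concat);
        System.out.println(joined); // Hello World

        // Finding maximum: identity negative infinity, combine with max
        double max = Stream.of(1.5, 9.0, 4.2).reduce(Double.NEGATIVE_INFINITY, Math::max);
        System.out.println(max); // 9.0

        // Checking all conditions: identity true, combine with AND
        boolean all = Stream.of(true, true, false).reduce(true, Boolean::logicalAnd);
        System.out.println(all); // false
    }
}
```

A Monoid bundles this identity-plus-combiner pair into a reusable, first-class value instead of leaving it implicit at each reduce call site.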
The Power of foldMap
The foldMap method lets you:
- Transform each focused value into a "combinable" type
- Automatically merge all those values using a monoid
Example: Calculate Total Price
import org.higherkindedj.hkt.Monoid;
Fold<Order, Product> products = OrderFolds.items();
// Define how to combine prices (addition)
Monoid<Double> sumMonoid = new Monoid<>() {
@Override
public Double empty() { return 0.0; } // Start with zero
@Override
public Double combine(Double a, Double b) { return a + b; } // Add them
};
// Extract each product's price and sum them all
double totalPrice = products.foldMap(
sumMonoid,
product -> product.price(), // Extract price from each product
order
);
// Result: 1374.99 (999.99 + 25.00 + 350.00)
What's happening here?
- For each Product in the order, extract its price → [999.99, 25.00, 350.00]
- Start with 0.0 (the empty value)
- Combine them: 0.0 + 999.99 + 25.00 + 350.00 = 1374.99
Common Monoid Patterns
Here are the most useful monoid patterns for everyday use. Best Practice: Use the standard implementations from the Monoids utility class whenever possible:
import org.higherkindedj.hkt.Monoids;
// Standard monoids available out of the box:
Monoid<Double> sumDouble = Monoids.doubleAddition();
Monoid<Double> productDouble = Monoids.doubleMultiplication();
Monoid<Integer> sumInt = Monoids.integerAddition();
Monoid<Integer> productInt = Monoids.integerMultiplication();
Monoid<Long> sumLong = Monoids.longAddition();
Monoid<Boolean> andMonoid = Monoids.booleanAnd();
Monoid<Boolean> orMonoid = Monoids.booleanOr();
Monoid<String> stringConcat = Monoids.string();
Monoid<List<A>> listConcat = Monoids.list();
Monoid<Set<A>> setUnion = Monoids.set();
Monoid<Optional<A>> firstWins = Monoids.firstOptional();
Monoid<Optional<A>> lastWins = Monoids.lastOptional();
Monoid<Optional<A>> maxValue = Monoids.maximum();
Monoid<Optional<A>> minValue = Monoids.minimum();
Sum (Adding Numbers)
// Use standard monoid from Monoids class
Monoid<Double> sumMonoid = Monoids.doubleAddition();
// Calculate total revenue
double revenue = productsFold.foldMap(sumMonoid, ProductItem::price, order);
Product (Multiplying Numbers)
Monoid<Double> productMonoid = Monoids.doubleMultiplication();
// Calculate compound discount (e.g., 0.9 * 0.95 * 0.85)
double finalMultiplier = discountsFold.foldMap(productMonoid, d -> d, discounts);
String Concatenation
Monoid<String> stringMonoid = Monoids.string();
// Join all product names
String allNames = productsFold.foldMap(stringMonoid, ProductItem::name, order);
List Accumulation
Monoid<List<String>> listMonoid = Monoids.list();
// Collect all categories (with duplicates)
List<String> categories = productsFold.foldMap(listMonoid,
p -> List.of(p.category()), order);
Boolean AND (All Must Be True)
Monoid<Boolean> andMonoid = Monoids.booleanAnd();
// Check if all products are in stock
boolean allInStock = productsFold.foldMap(andMonoid, ProductItem::inStock, order);
Boolean OR (Any Can Be True)
Monoid<Boolean> orMonoid = Monoids.booleanOr();
// Check if any product is expensive
boolean hasExpensive = productsFold.foldMap(orMonoid,
p -> p.price() > 1000.0, order);
Maximum Value
// Use Optional-based maximum from Monoids
Monoid<Optional<Double>> maxMonoid = Monoids.maximum();
// Find highest price (returns Optional to handle empty collections)
Optional<Double> maxPrice = productsFold.foldMap(maxMonoid,
p -> Optional.of(p.price()), order);
// Or create a custom one for raw doubles:
Monoid<Double> rawMaxMonoid = new Monoid<>() {
@Override public Double empty() { return Double.NEGATIVE_INFINITY; }
@Override public Double combine(Double a, Double b) { return Math.max(a, b); }
};
double maxPriceRaw = productsFold.foldMap(rawMaxMonoid, ProductItem::price, order);
Why Monoids Matter
Monoids give you:
- Composability: Combine complex aggregations from simple building blocks
- Reusability: Define a monoid once, use it everywhere
- Correctness: The monoid laws guarantee consistent behaviour
- Flexibility: Create custom aggregations for your domain
Pro Tip: You can create custom monoids for any domain-specific aggregation logic, like calculating weighted averages, combining validation results, or merging configuration objects.
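As a sketch of that idea, here is a custom monoid for combining validation results. The Monoid interface is inlined as a stand-in so the example compiles on its own; in real code you would implement org.higherkindedj.hkt.Monoid instead, and ValidationResult is a hypothetical domain type:

```java
import java.util.ArrayList;
import java.util.List;

public class ValidationMonoidExample {
    // Stand-in for org.higherkindedj.hkt.Monoid, inlined for self-containment
    interface Monoid<M> { M empty(); M combine(M a, M b); }

    // Hypothetical domain type: a validation outcome with accumulated error messages
    public record ValidationResult(boolean valid, List<String> errors) {}

    public static final Monoid<ValidationResult> VALIDATION = new Monoid<>() {
        @Override public ValidationResult empty() {
            return new ValidationResult(true, List.of()); // "no checks yet" is vacuously valid
        }
        @Override public ValidationResult combine(ValidationResult a, ValidationResult b) {
            List<String> merged = new ArrayList<>(a.errors());
            merged.addAll(b.errors()); // keep every error from both sides
            return new ValidationResult(a.valid() && b.valid(), List.copyOf(merged));
        }
    };

    public static void main(String[] args) {
        var ok = new ValidationResult(true, List.of());
        var bad = new ValidationResult(false, List.of("price must be positive"));
        var result = VALIDATION.combine(VALIDATION.combine(VALIDATION.empty(), ok), bad);
        System.out.println(result.valid() + " " + result.errors()); // false [price must be positive]
    }
}
```

With the real Monoid type, the same instance could be passed straight to foldMap to validate every focused element in a single pass, accumulating all errors rather than stopping at the first.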
When to Use Folds vs Other Approaches
Use Fold When:
- Read-only queries - You only need to extract or check data
- Intent matters - You want to express "this is a query, not a modification"
- Composable searches - Building reusable query paths
- Aggregations - Using monoids for custom combining logic
- CQRS patterns - Separating queries from commands
// Perfect for read-only analysis
Fold<OrderHistory, Product> allProducts =
OrderHistoryFolds.orders()
.andThen(OrderFolds.items());
boolean hasElectronics = allProducts.exists(
p -> "Electronics".equals(p.category()),
history
);
Use Traversal When:
- Modifications needed - You need to update the data
- Effectful updates - Using modifyF for validation or async operations
- Bulk transformations - Changing multiple values at once
// Use Traversal for modifications
Traversal<Order, Product> productTraversal = OrderTraversals.items();
Order discountedOrder = Traversals.modify(
productTraversal.andThen(ProductLenses.price().asTraversal()),
price -> price * 0.9,
order
);
Use Stream API When:
- Complex filtering - Multiple filter/map/reduce operations
- Parallel processing - Taking advantage of parallel streams
- Standard Java collections - Working with flat collections
- Stateful operations - Operations that require maintaining state
// Better with streams for complex pipelines
List<String> topExpensiveItems = order.items().stream()
.filter(p -> p.price() > 100)
.sorted(Comparator.comparing(Product::price).reversed())
.limit(5)
.map(Product::name)
.collect(toList());
Use Direct Field Access When:
- Simple cases - Single, straightforward field read
- Performance critical - Minimal abstraction overhead
- One-off operations - Not building reusable logic
// Just use direct access for simple cases
String customerName = order.customerName();
Common Pitfalls
❌ Don't Do This:
// Inefficient: Creating folds repeatedly in loops
for (Order order : orders) {
Fold<Order, Product> fold = OrderFolds.items();
List<Product> products = fold.getAll(order);
// ... process products
}
// Over-engineering: Using Fold for trivial single-field access
Fold<Order, String> customerFold = OrderFolds.customerName();
String name = customerFold.getAll(order).get(0); // Just use order.customerName()!
// Wrong tool: Trying to modify data with a Fold
// Folds are read-only - this won't compile
// Fold<Order, Product> items = OrderFolds.items();
// Order updated = items.set(newProduct, order); // ❌ No 'set' method!
// Verbose: Unnecessary conversion when Traversal is already available
Traversal<Order, Product> traversal = OrderTraversals.items();
Fold<Order, Product> fold = traversal.asFold();
List<Product> products = fold.getAll(order); // Just use Traversals.getAll() directly!
✅ Do This Instead:
// Efficient: Create fold once, reuse many times
Fold<Order, Product> itemsFold = OrderFolds.items();
for (Order order : orders) {
List<Product> products = itemsFold.getAll(order);
// ... process products
}
// Right tool: Direct access for simple cases
String name = order.customerName();
// Clear intent: Use Traversal when you need modifications
Traversal<Order, Product> itemsTraversal = OrderTraversals.items();
Order updated = Traversals.modify(itemsTraversal, this::applyDiscount, order);
// Clear purpose: Use Fold when expressing query intent
Fold<Order, Product> queryItems = OrderFolds.items();
boolean hasExpensive = queryItems.exists(p -> p.price() > 1000, order);
Performance Notes
Folds are optimised for query operations:
- Memory efficient: Uses iterators internally, no intermediate collections for most operations
- Lazy evaluation: Short-circuits on operations like find and exists (stops at first match)
- Reusable: Composed folds can be stored and reused across your application
- Type-safe: All operations checked at compile time
- Zero allocation: foldMap with monoids avoids creating intermediate collections
Best Practice: For frequently used query paths, create them once and store as constants:
public class OrderQueries {
public static final Fold<OrderHistory, Product> ALL_PRODUCTS =
OrderHistoryFolds.orders()
.andThen(OrderFolds.items());
public static final Fold<OrderHistory, Double> ALL_PRICES =
ALL_PRODUCTS.andThen(ProductLenses.price().asFold());
public static final Fold<Order, Product> ELECTRONICS =
OrderFolds.items(); // Can filter with exists/find/getAll + stream filter
}
Real-World Example: Order Analytics
Here's a practical example showing comprehensive use of Fold for business analytics:
import org.higherkindedj.optics.Fold;
import org.higherkindedj.optics.Lens;
import org.higherkindedj.optics.annotations.GenerateFolds;
import org.higherkindedj.optics.annotations.GenerateLenses;
import org.higherkindedj.hkt.Monoid;
import java.time.LocalDate;
import java.util.*;
@GenerateLenses
@GenerateFolds
public record Product(String name, double price, String category, boolean inStock) {}
@GenerateLenses
@GenerateFolds
public record Order(String orderId, List<Product> items, String customerName, LocalDate orderDate) {}
@GenerateLenses
@GenerateFolds
public record OrderHistory(List<Order> orders) {}
public class OrderAnalytics {
private static final Fold<Order, Product> ORDER_ITEMS = OrderFolds.items();
private static final Fold<OrderHistory, Order> HISTORY_ORDERS = OrderHistoryFolds.orders();
private static final Fold<OrderHistory, Product> ALL_PRODUCTS =
HISTORY_ORDERS.andThen(ORDER_ITEMS);
private static final Monoid<Double> SUM_MONOID = new Monoid<>() {
@Override public Double empty() { return 0.0; }
@Override public Double combine(Double a, Double b) { return a + b; }
};
// Calculate total revenue across all orders
public static double calculateRevenue(OrderHistory history) {
return ALL_PRODUCTS.foldMap(SUM_MONOID, Product::price, history);
}
// Find most expensive product across all orders
public static Optional<Product> findMostExpensiveProduct(OrderHistory history) {
return ALL_PRODUCTS.getAll(history).stream()
.max(Comparator.comparing(Product::price));
}
// Check if any order has out-of-stock items
public static boolean hasOutOfStockIssues(OrderHistory history) {
return ALL_PRODUCTS.exists(p -> !p.inStock(), history);
}
// Get all unique categories
public static Set<String> getAllCategories(OrderHistory history) {
Fold<OrderHistory, String> categories =
ALL_PRODUCTS.andThen(ProductLenses.category().asFold());
return new HashSet<>(categories.getAll(history));
}
// Count products in a specific category
public static int countByCategory(OrderHistory history, String category) {
return (int) ALL_PRODUCTS.getAll(history).stream()
.filter(p -> category.equals(p.category()))
.count();
}
// Calculate average order value
public static double calculateAverageOrderValue(OrderHistory history) {
List<Order> allOrders = HISTORY_ORDERS.getAll(history);
if (allOrders.isEmpty()) return 0.0;
double totalRevenue = calculateRevenue(history);
return totalRevenue / allOrders.size();
}
// Find orders with specific product
public static List<Order> findOrdersContaining(OrderHistory history, String productName) {
return HISTORY_ORDERS.getAll(history).stream()
.filter(order -> ORDER_ITEMS.exists(
p -> productName.equals(p.name()),
order
))
.toList();
}
}
The Relationship to Foldable
Quick Summary
If you're just getting started, here's what you need to know: A Fold<S, A> is closely related to the Foldable type class from functional programming. While Foldable<F> works with any container type F (like List, Optional, Tree), a Fold<S, A> lets you treat any structure S as if it were a foldable container of A values—even when S isn't actually a collection.
Key Connection: Both use foldMap to aggregate values using monoids. The Fold optic brings this powerful abstraction to arbitrary data structures, not just collections.
In-Depth Explanation
For those familiar with functional programming or interested in the deeper theory:
The Foldable Type Class
The Foldable<F> type class in higher-kinded-j represents any data structure F that can be "folded up" or reduced to a summary value. It's defined with this signature:
public interface Foldable<F> {
<A, M> M foldMap(
Monoid<M> monoid,
Function<? super A, ? extends M> f,
Kind<F, A> fa
);
}
Common instances include:
- List<A> - fold over all elements
- Optional<A> - fold over zero or one element
- Either<E, A> - fold over the right value if present
- Tree<A> - fold over all nodes in a tree
How Fold Relates to Foldable
A Fold<S, A> can be thought of as a first-class, composable lens into a Foldable structure. More precisely:
- Virtualisation: Fold<S, A> lets you "view" any structure S as a virtual Foldable container of A values, even if S is not inherently a collection
- Composition: Unlike Foldable<F>, which is fixed to a specific container type F, Fold<S, A> can be composed with other optics to create deep query paths
- Reification: A Fold reifies (makes concrete) the act of folding, turning it into a first-class value you can pass around, store, and combine
Example Comparison:
// Using Foldable directly on a List
Foldable<ListKind.Witness> listFoldable = ListTraverse.INSTANCE;
List<Integer> numbers = List.of(1, 2, 3, 4, 5);
int sum = listFoldable.foldMap(sumMonoid, Function.identity(), LIST.widen(numbers));
// Using a Fold optic to query nested structure
Fold<Order, Integer> quantities = OrderFolds.items()
.andThen(ProductLenses.quantity().asFold());
int totalQuantity = quantities.foldMap(sumMonoid, Function.identity(), order);
The Fold optic gives you the power of Foldable, but for arbitrary access paths through your domain model, not just direct containers.
Fold Laws and Foldable Laws
Both Fold and Foldable obey the same monoid laws:
- Left identity: combine(empty, x) = x
- Right identity: combine(x, empty) = x
- Associativity: combine(combine(x, y), z) = combine(x, combine(y, z))
This means foldMap produces consistent, predictable results regardless of how the fold is internally structured.
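These laws can be spot-checked mechanically. A small sketch, again inlining a Monoid stand-in (the real interface is org.higherkindedj.hkt.Monoid) so the check runs on its own:

```java
public class MonoidLawsCheck {
    // Stand-in for org.higherkindedj.hkt.Monoid, inlined for self-containment
    interface Monoid<M> { M empty(); M combine(M a, M b); }

    static final Monoid<Integer> SUM = new Monoid<>() {
        @Override public Integer empty() { return 0; }
        @Override public Integer combine(Integer a, Integer b) { return a + b; }
    };

    // Checks the three monoid laws for one sample triple of values
    static <M> boolean obeysLaws(Monoid<M> m, M x, M y, M z) {
        boolean leftId  = m.combine(m.empty(), x).equals(x);
        boolean rightId = m.combine(x, m.empty()).equals(x);
        boolean assoc   = m.combine(m.combine(x, y), z)
                           .equals(m.combine(x, m.combine(y, z)));
        return leftId && rightId && assoc;
    }

    public static void main(String[] args) {
        System.out.println(obeysLaws(SUM, 5, 10, 3)); // true
    }
}
```

In practice a property-based test would run such a check over many random triples rather than one fixed sample.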
Practical Implications
Understanding this relationship helps you:
- Transfer knowledge: If you learn Foldable, you understand the core of Fold
- Recognise patterns: Monoid aggregation is universal across both abstractions
- Build intuition: A Fold is like having a custom Foldable instance for each access path in your domain
- Compose freely: You can convert between optics and type classes when needed (e.g., Lens.asFold())
Further Reading:
- Foldable and Traverse in higher-kinded-j - Deep dive into the type class
- Haskell Lens Library - Folds - The original inspiration
- Optics By Example (Book) - Comprehensive treatment of folds in Haskell
Complete, Runnable Example
This example demonstrates all major Fold operations in a single, cohesive application:
package org.higherkindedj.example.optics;
import org.higherkindedj.optics.Fold;
import org.higherkindedj.optics.Lens;
import org.higherkindedj.optics.annotations.GenerateFolds;
import org.higherkindedj.optics.annotations.GenerateLenses;
import org.higherkindedj.hkt.Monoid;
import org.higherkindedj.hkt.Monoids;
import java.util.*;
public class FoldUsageExample {
@GenerateLenses
@GenerateFolds
public record ProductItem(String name, double price, String category, boolean inStock) {}
@GenerateLenses
@GenerateFolds
public record Order(String orderId, List<ProductItem> items, String customerName) {}
@GenerateLenses
@GenerateFolds
public record OrderHistory(List<Order> orders) {}
public static void main(String[] args) {
// Create sample data
var order1 = new Order("ORD-001", List.of(
new ProductItem("Laptop", 999.99, "Electronics", true),
new ProductItem("Mouse", 25.00, "Electronics", true),
new ProductItem("Desk", 350.00, "Furniture", false)
), "Alice");
var order2 = new Order("ORD-002", List.of(
new ProductItem("Keyboard", 75.00, "Electronics", true),
new ProductItem("Monitor", 450.00, "Electronics", true),
new ProductItem("Chair", 200.00, "Furniture", true)
), "Bob");
var history = new OrderHistory(List.of(order1, order2));
System.out.println("=== FOLD USAGE EXAMPLE ===\n");
// --- SCENARIO 1: Basic Query Operations ---
System.out.println("--- Scenario 1: Basic Query Operations ---");
Fold<Order, ProductItem> itemsFold = OrderFolds.items();
List<ProductItem> allItems = itemsFold.getAll(order1);
System.out.println("All items: " + allItems.size() + " products");
Optional<ProductItem> firstItem = itemsFold.preview(order1);
System.out.println("First item: " + firstItem.map(ProductItem::name).orElse("none"));
int count = itemsFold.length(order1);
System.out.println("Item count: " + count);
boolean isEmpty = itemsFold.isEmpty(order1);
System.out.println("Is empty: " + isEmpty + "\n");
// --- SCENARIO 2: Conditional Queries ---
System.out.println("--- Scenario 2: Conditional Queries ---");
boolean hasOutOfStock = itemsFold.exists(p -> !p.inStock(), order1);
System.out.println("Has out of stock items: " + hasOutOfStock);
boolean allInStock = itemsFold.all(ProductItem::inStock, order1);
System.out.println("All items in stock: " + allInStock);
Optional<ProductItem> expensiveItem = itemsFold.find(p -> p.price() > 500, order1);
System.out.println("First expensive item: " + expensiveItem.map(ProductItem::name).orElse("none") + "\n");
// --- SCENARIO 3: Composition ---
System.out.println("--- Scenario 3: Composed Folds ---");
Fold<OrderHistory, ProductItem> allProducts =
OrderHistoryFolds.orders().andThen(OrderFolds.items());
List<ProductItem> allProductsFromHistory = allProducts.getAll(history);
System.out.println("Total products across all orders: " + allProductsFromHistory.size());
Fold<OrderHistory, String> allCategories =
allProducts.andThen(ProductItemLenses.category().asFold());
Set<String> uniqueCategories = new HashSet<>(allCategories.getAll(history));
System.out.println("Unique categories: " + uniqueCategories + "\n");
// --- SCENARIO 4: Monoid Aggregation ---
System.out.println("--- Scenario 4: Monoid-Based Aggregation ---");
// Use standard monoids from Monoids utility class
Monoid<Double> sumMonoid = Monoids.doubleAddition();
double orderTotal = itemsFold.foldMap(sumMonoid, ProductItem::price, order1);
System.out.println("Order 1 total: £" + String.format("%.2f", orderTotal));
double historyTotal = allProducts.foldMap(sumMonoid, ProductItem::price, history);
System.out.println("All orders total: £" + String.format("%.2f", historyTotal));
// Boolean AND monoid for checking conditions
Monoid<Boolean> andMonoid = Monoids.booleanAnd();
boolean allAffordable = itemsFold.foldMap(andMonoid, p -> p.price() < 1000, order1);
System.out.println("All items under £1000: " + allAffordable);
// Boolean OR monoid for checking any condition
Monoid<Boolean> orMonoid = Monoids.booleanOr();
boolean hasElectronics = allProducts.foldMap(orMonoid,
p -> "Electronics".equals(p.category()), history);
System.out.println("Has electronics: " + hasElectronics + "\n");
// --- SCENARIO 5: Analytics ---
System.out.println("--- Scenario 5: Real-World Analytics ---");
// Most expensive product
Optional<ProductItem> mostExpensive = allProducts.getAll(history).stream()
.max(Comparator.comparing(ProductItem::price));
System.out.println("Most expensive product: " +
mostExpensive.map(p -> p.name() + " (£" + p.price() + ")").orElse("none"));
// Average price
List<ProductItem> allProds = allProducts.getAll(history);
double avgPrice = allProds.isEmpty() ? 0.0 :
historyTotal / allProds.size();
System.out.println("Average product price: £" + String.format("%.2f", avgPrice));
// Count by category
long electronicsCount = allProducts.getAll(history).stream()
.filter(p -> "Electronics".equals(p.category()))
.count();
System.out.println("Electronics count: " + electronicsCount);
System.out.println("\n=== END OF EXAMPLE ===");
}
}
Expected Output:
=== FOLD USAGE EXAMPLE ===
--- Scenario 1: Basic Query Operations ---
All items: 3 products
First item: Laptop
Item count: 3
Is empty: false
--- Scenario 2: Conditional Queries ---
Has out of stock items: true
All items in stock: false
First expensive item: Laptop
--- Scenario 3: Composed Folds ---
Total products across all orders: 6
Unique categories: [Electronics, Furniture]
--- Scenario 4: Monoid-Based Aggregation ---
Order 1 total: £1374.99
All orders total: £2099.99
All items under £1000: true
Has electronics: true
--- Scenario 5: Real-World Analytics ---
Most expensive product: Laptop (£999.99)
Average product price: £350.00
Electronics count: 4
=== END OF EXAMPLE ===
Why Folds Are Essential
Fold completes the optics toolkit by providing:
- Clear Intent: Explicitly read-only operations prevent accidental modifications
- Composability: Chain folds with other optics for deep queries
- Aggregation Power: Use monoids for flexible, reusable combining logic
- Type Safety: All queries checked at compile time
- Reusability: Build query libraries tailored to your domain
- CQRS Support: Separate query models from command models cleanly
- Performance: Optimised for read-only access with short-circuiting and lazy evaluation
By adding Fold to your arsenal alongside Lens, Prism, Iso, and Traversal, you have complete coverage for both reading and writing immutable data structures in a type-safe, composable way.
The key insight: Folds make queries first-class citizens in your codebase, just as valuable and well-designed as the commands that modify state.
Previous: Traversals: Handling Bulk Updates Next: Filtered Optics: Predicate-Based Composition
Filtered Optics: Predicate-Based Composition
Declarative Filtering for Targeted Operations
- How to filter elements within traversals and folds using predicates
- Using filtered() for declarative, composable filtering as part of optic composition
- The difference between filtering during modification vs filtering during queries
- Advanced filtering with filterBy() for query-based predicates
- The static Traversals.filtered() combinator for affine traversals
- Understanding lazy evaluation semantics (preserved structure vs excluded queries)
- When to use filtered optics vs Stream API vs conditional logic
- Real-world patterns for customer segmentation, inventory management, and analytics
In our journey through optics, we've seen how Traversal handles bulk operations on collections and how Fold provides read-only queries. But what happens when you need to operate on only some elements—those that satisfy a specific condition?
Traditionally, filtering requires breaking out of your optic composition to use streams or loops, mixing the what (your transformation logic) with the how (iteration and filtering). Filtered optics solve this elegantly by making filtering a first-class part of your optic composition.
The Scenario: Customer Segmentation in a SaaS Platform
Imagine you're building a Software-as-a-Service platform where you need to:
- Grant bonuses only to active users
- Send notifications to users with overdue invoices
- Analyse spending patterns for customers with high-value orders
- Update pricing only for products in specific categories
The Data Model:
@GenerateLenses
public record User(String name, boolean active, int score, SubscriptionTier tier) {
User grantBonus() {
return new User(name, active, score + 100, tier);
}
}
@GenerateLenses
@GenerateFolds
public record Invoice(String id, double amount, boolean overdue) {}
@GenerateLenses
@GenerateFolds
public record Customer(String name, List<Invoice> invoices, SubscriptionTier tier) {}
@GenerateLenses
@GenerateFolds
public record Platform(List<User> users, List<Customer> customers) {}
public enum SubscriptionTier { FREE, BASIC, PREMIUM, ENTERPRISE }
The Traditional Approach:
// Verbose: Manual filtering breaks optic composition
List<User> updatedUsers = platform.users().stream()
.map(user -> user.active() ? user.grantBonus() : user)
.collect(Collectors.toList());
Platform updatedPlatform = new Platform(updatedUsers, platform.customers());
// Even worse with nested structures
List<Customer> customersWithOverdue = platform.customers().stream()
.filter(customer -> customer.invoices().stream()
.anyMatch(Invoice::overdue))
.collect(Collectors.toList());
This approach forces you to abandon the declarative power of optics, manually managing iteration and reconstruction. Filtered optics let you express this intent directly within your optic composition.
Think of Filtered Optics Like...
- A SQL WHERE clause: SELECT * FROM users WHERE active = true
- A spotlight with a mask: Illuminates only the items that match your criteria
- A sieve: Allows matching elements to pass through whilst blocking others
- A conditional lens: Focuses only on elements satisfying a predicate
- A smart selector: Like CSS selectors that target specific elements based on attributes
The key insight: filtering becomes part of your optic's identity, not an external operation applied afterwards.
Three Ways to Filter
Higher-kinded-j provides three complementary approaches to filtered optics:
| Approach | Signature | Use Case |
|---|---|---|
| Instance method | traversal.filtered(predicate) | Filter within an existing traversal |
| Static combinator | Traversals.filtered(predicate) | Create a reusable affine traversal |
| Query-based filter | traversal.filterBy(fold, predicate) | Filter based on nested properties |
Each serves different needs, and they can be combined for powerful compositions.
A Step-by-Step Walkthrough
Step 1: Instance Method — filtered(Predicate)
The most intuitive approach: call filtered() on any Traversal or Fold to create a new optic that only focuses on matching elements.
On Traversals (Read + Write)
// Create a traversal for all users
Traversal<List<User>, User> allUsers = Traversals.forList();
// Filter to active users only
Traversal<List<User>, User> activeUsers = allUsers.filtered(User::active);
// Grant bonus ONLY to active users
List<User> result = Traversals.modify(activeUsers, User::grantBonus, users);
// Active users get bonus; inactive users preserved unchanged
// Extract ONLY active users
List<User> actives = Traversals.getAll(activeUsers, users);
// Returns only those matching the predicate
Critical Semantic: During modification, non-matching elements are preserved unchanged in the structure. During queries (like getAll), they are excluded from the results. This preserves the overall structure whilst focusing operations on the subset you care about.
On Folds (Read-Only)
// Fold from Order to Items
Fold<Order, Item> itemsFold = Fold.of(Order::items);
// Filter to expensive items only
Fold<Order, Item> expensiveItems = itemsFold.filtered(item -> item.price() > 100);
// Query operations work on filtered subset
int count = expensiveItems.length(order); // Count expensive items
List<Item> expensive = expensiveItems.getAll(order); // Get expensive items
double total = expensiveItems.foldMap(sumMonoid, Item::price, order); // Sum expensive
boolean allPremium = expensiveItems.all(Item::isPremium, order); // Check expensive items
Step 2: Composing Filtered Traversals
The real power emerges when you compose filtered optics with other optics:
// Compose: list → filtered users → user name
Traversal<List<User>, String> activeUserNames =
Traversals.<User>forList()
.filtered(User::active)
.andThen(UserLenses.name().asTraversal());
List<User> users = List.of(
new User("alice", true, 100, PREMIUM),
new User("bob", false, 200, FREE),
new User("charlie", true, 150, BASIC)
);
// Get only active user names
List<String> names = Traversals.getAll(activeUserNames, users);
// Result: ["alice", "charlie"]
// Uppercase only active user names
List<User> result = Traversals.modify(activeUserNames, String::toUpperCase, users);
// Result: [User("ALICE", true, 100, PREMIUM), User("bob", false, 200, FREE), User("CHARLIE", true, 150, BASIC)]
// Notice: bob remains unchanged because he's inactive
Step 3: Chaining Multiple Filters
Filters can be chained to create complex predicates:
// Active users with high scores (AND logic)
Traversal<List<User>, User> activeHighScorers =
Traversals.<User>forList()
.filtered(User::active)
.filtered(user -> user.score() > 120);
// Premium or Enterprise tier users
Traversal<List<User>, User> premiumUsers =
Traversals.<User>forList()
.filtered(user -> user.tier() == PREMIUM || user.tier() == ENTERPRISE);
Step 4: Static Combinator — Traversals.filtered()
The static method creates an affine traversal (0 or 1 focus) that can be composed anywhere in a chain:
// Create a reusable filter
Traversal<User, User> activeFilter = Traversals.filtered(User::active);
// Use standalone
User user = new User("Alice", true, 100, BASIC);
User result = Traversals.modify(activeFilter, User::grantBonus, user);
// If active, grants bonus; otherwise returns unchanged
// Compose into a pipeline
Traversal<List<User>, String> activeUserNames =
Traversals.<User>forList()
.andThen(Traversals.filtered(User::active)) // Static combinator
.andThen(UserLenses.name().asTraversal());
When to use the static combinator vs instance method:
- Static combinator: When you want a reusable filter that can be inserted into different compositions
- Instance method: When filtering is a natural part of a specific traversal's behaviour
Both approaches are semantically equivalent—choose based on readability and reusability:
// These are equivalent:
Traversal<List<User>, User> approach1 = Traversals.<User>forList().filtered(User::active);
Traversal<List<User>, User> approach2 = Traversals.<User>forList().andThen(Traversals.filtered(User::active));
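The "affine" (0-or-1 focus) behaviour of the static combinator can be sketched in plain Java — this is an illustration of the idea, not the library's code. On a single value, the filtered optic either focuses on it (predicate passes) or has no focus at all, in which case modification is the identity.

```java
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

public class AffineSketch {
    // An affine traversal has 0 or 1 focus: modify applies f only when the
    // predicate holds; otherwise the value is returned unchanged.
    public static <A> A modifyIfMatches(Predicate<A> p, UnaryOperator<A> f, A value) {
        return p.test(value) ? f.apply(value) : value;
    }

    public static void main(String[] args) {
        Predicate<String> startsWithA = s -> s.startsWith("a");
        System.out.println(modifyIfMatches(startsWithA, String::toUpperCase, "alice")); // ALICE
        System.out.println(modifyIfMatches(startsWithA, String::toUpperCase, "bob"));   // bob
    }
}
```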
Step 5: Advanced Filtering — filterBy(Fold, Predicate)
Sometimes you need to filter based on nested properties or aggregated queries. The filterBy method accepts a Fold that queries each element, including only those where at least one queried value matches the predicate.
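The "at least one match" semantics can be modelled in plain Java by treating the fold as a function from an element to the list of values it focuses on. This is a conceptual sketch (using `Function<S, List<A>>` in place of the library's `Fold`), not the library's implementation.

```java
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

public class FilterBySketch {
    // filterBy in miniature: keep an element iff querying it with `fold`
    // yields AT LEAST ONE value satisfying `predicate` (anyMatch semantics).
    public static <S, A> List<S> filterBy(
            Function<S, List<A>> fold, Predicate<A> predicate, List<S> elements) {
        return elements.stream()
                .filter(s -> fold.apply(s).stream().anyMatch(predicate))
                .toList();
    }

    record Invoice(double amount, boolean overdue) {}
    record Customer(String name, List<Invoice> invoices) {}

    public static void main(String[] args) {
        List<Customer> customers = List.of(
                new Customer("Ada", List.of(new Invoice(100, false), new Invoice(50, true))),
                new Customer("Bob", List.of(new Invoice(200, false))));
        // Only Ada has an overdue invoice, so only she is included
        System.out.println(filterBy(Customer::invoices, Invoice::overdue, customers)
                .stream().map(Customer::name).toList()); // [Ada]
    }
}
```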
Example: Customers with Overdue Invoices
Traversal<List<Customer>, Customer> allCustomers = Traversals.forList();
Fold<Customer, Invoice> customerInvoices = Fold.of(Customer::invoices);
// Filter customers who have ANY overdue invoice
Traversal<List<Customer>, Customer> customersWithOverdue =
allCustomers.filterBy(customerInvoices, Invoice::overdue);
// Update tier for customers with overdue invoices
Lens<Customer, SubscriptionTier> tierLens = CustomerLenses.tier();
List<Customer> updated = Traversals.modify(
customersWithOverdue.andThen(tierLens.asTraversal()),
tier -> SubscriptionTier.BASIC, // Downgrade tier
customers
);
Example: Orders with High-Value Items
Traversal<List<Order>, Order> allOrders = Traversals.forList();
Fold<Order, Item> orderItems = Fold.of(Order::items);
// Orders containing at least one item over £500
Traversal<List<Order>, Order> highValueOrders =
allOrders.filterBy(orderItems, item -> item.price() > 500);
List<Order> result = Traversals.getAll(highValueOrders, orders);
// Returns orders that have at least one expensive item
Example: Using Composed Folds
Traversal<List<Customer>, Customer> allCustomers = Traversals.forList();
Fold<Customer, Order> customerOrders = Fold.of(Customer::orders);
Fold<Order, Item> orderItems = Fold.of(Order::items);
// Fold from Customer to all their Items (across all orders)
Fold<Customer, Item> customerItems = customerOrders.andThen(orderItems);
// Customers who have purchased any premium product
Traversal<List<Customer>, Customer> premiumBuyers =
allCustomers.filterBy(customerItems, Item::isPremium);
// Mark them as VIP
Lens<Customer, String> nameLens = CustomerLenses.name();
Traversal<List<Customer>, String> premiumBuyerNames =
premiumBuyers.andThen(nameLens.asTraversal());
List<Customer> result = Traversals.modify(
premiumBuyerNames,
name -> name + " [VIP]",
customers
);
Understanding the Semantics: Preserved vs Excluded
A crucial aspect of filtered optics is understanding what happens to non-matching elements:
| Operation | Non-Matching Elements |
|---|---|
modify / modifyF | Preserved unchanged in the structure |
getAll | Excluded from results |
foldMap / exists / all | Excluded from aggregation |
length | Not counted |
Visual Example:
List<User> users = List.of(
new User("Alice", true, 100, BASIC), // active
new User("Bob", false, 200, FREE), // inactive
new User("Charlie", true, 150, BASIC) // active
);
Traversal<List<User>, User> activeUsers = forList().filtered(User::active);
// MODIFY: Structure preserved, only matching modified
List<User> modified = Traversals.modify(activeUsers, User::grantBonus, users);
// [User(Alice, true, 200, BASIC), User(Bob, false, 200, FREE), User(Charlie, true, 250, BASIC)]
//       ↑ modified                     ↑ UNCHANGED                  ↑ modified
// QUERY: Only matching elements returned
List<User> gotten = Traversals.getAll(activeUsers, users);
// [User(Alice, true, 100, BASIC), User(Charlie, true, 150, BASIC)]
// Bob is EXCLUDED entirely
This behaviour is intentional: it allows you to transform selectively whilst maintaining referential integrity, and query selectively without polluting results.
When to Use Filtered Optics vs Other Approaches
Use Filtered Optics When:
- Declarative composition - You want filtering to be part of the optic's definition
- Selective modifications - Modify only elements matching criteria
- Reusable filters - Define once, compose everywhere
- Type-safe pipelines - Filter as part of a larger optic chain
- Intent clarity - Express "active users" as a single concept
// Perfect: Declarative, composable, reusable
Traversal<Platform, User> activeEnterpriseUsers =
PlatformTraversals.users()
.filtered(User::active)
.filtered(user -> user.tier() == ENTERPRISE);
Platform updated = Traversals.modify(activeEnterpriseUsers, User::grantBonus, platform);
Use Stream API When:
- Complex transformations - Multiple map/filter/reduce operations
- Collecting to different structures - Need to change the collection type
- Statistical operations - Sorting, limiting, grouping
- One-off queries - Not building reusable logic
// Better with streams: Complex pipeline with sorting and limiting
List<String> topActiveUserNames = users.stream()
.filter(User::active)
.sorted(Comparator.comparing(User::score).reversed())
.limit(10)
.map(User::name)
.collect(toList());
Use Conditional Logic When:
- Control flow - Early returns, exceptions, complex branching
- Side effects - Logging, metrics, external calls based on conditions
- Performance critical - Minimal abstraction overhead needed
// Sometimes explicit logic is clearest
for (User user : users) {
if (user.active() && user.score() < 0) {
throw new IllegalStateException("Active user with negative score: " + user);
}
}
Common Pitfalls
❌ Don't Do This:
// Inefficient: Recreating filtered traversals in loops
for (Platform platform : platforms) {
var activeUsers = Traversals.<User>forList().filtered(User::active);
Traversals.modify(activeUsers, User::grantBonus, platform.users());
}
// Confusing: Mixing filtering approaches
List<User> activeUsers = Traversals.getAll(userTraversal, users).stream()
.filter(User::active) // Filtering AFTER optic extraction defeats the purpose
.collect(toList());
// Wrong mental model: Expecting structure change
Traversal<List<User>, User> active = forList().filtered(User::active);
List<User> result = Traversals.modify(active, User::grantBonus, users);
// result still has same LENGTH as users! Non-matching preserved, not removed
// Over-engineering: Filtering for trivial cases
Fold<User, Boolean> isActiveFold = UserLenses.active().asFold();
boolean active = isActiveFold.getAll(user).get(0); // Just use user.active()!
✅ Do This Instead:
// Efficient: Create filtered optic once, reuse many times
Traversal<List<User>, User> activeUsers = Traversals.<User>forList().filtered(User::active);
for (Platform platform : platforms) {
Traversals.modify(activeUsers, User::grantBonus, platform.users());
}
// Clear: Filter is part of the optic definition
Traversal<List<User>, User> activeUsers = forList().filtered(User::active);
List<User> result = Traversals.getAll(activeUsers, users);
// Returns only active users
// Correct expectation: Use getAll for extraction, modify for transformation
List<User> onlyActives = Traversals.getAll(activeUsers, users); // Filters results
List<User> allWithActivesBonused = Traversals.modify(activeUsers, User::grantBonus, users); // Preserves structure
// Simple: Use direct access for trivial cases
boolean isActive = user.active();
Performance Notes
Filtered optics are optimised for efficiency:
- Lazy evaluation: The predicate is only called when needed
- Short-circuiting: Operations like `exists` and `find` stop at the first match
- No intermediate collections: Filtering happens during traversal, not before
- Structural sharing: Unmodified parts of the structure are reused
- Single pass: Both filtering and transformation occur in one traversal
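The short-circuiting point can be demonstrated in plain Java — a minimal illustration of the idea, not the library's implementation. An instrumented predicate shows that `exists` stops evaluating as soon as a match is found.

```java
import java.util.List;
import java.util.function.Predicate;

public class ShortCircuitSketch {
    // exists stops at the first match: the predicate is never applied
    // to elements after the one that succeeds.
    public static <A> boolean exists(Predicate<A> p, List<A> src) {
        for (A a : src) {
            if (p.test(a)) return true; // short-circuit here
        }
        return false;
    }

    public static void main(String[] args) {
        int[] calls = {0}; // count how many times the predicate runs
        boolean found = exists(n -> { calls[0]++; return n > 1; },
                List.of(1, 2, 3, 4));
        System.out.println(found + " after " + calls[0] + " predicate calls");
        // true after 2 predicate calls — elements 3 and 4 were never tested
    }
}
```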
Best Practice: Store frequently-used filtered traversals as constants:
public class PlatformOptics {
public static final Traversal<Platform, User> ACTIVE_USERS =
PlatformTraversals.users().filtered(User::active);
public static final Traversal<Platform, User> PREMIUM_ACTIVE_USERS =
ACTIVE_USERS.filtered(user -> user.tier() == PREMIUM);
public static final Traversal<Platform, Customer> CUSTOMERS_WITH_OVERDUE =
PlatformTraversals.customers()
.filterBy(CustomerFolds.invoices(), Invoice::overdue);
public static final Fold<Platform, Invoice> ALL_OVERDUE_INVOICES =
PlatformFolds.customers()
.andThen(CustomerFolds.invoices())
.filtered(Invoice::overdue);
}
Real-World Example: Customer Analytics Dashboard
Here's a comprehensive example demonstrating filtered optics in a business context:
package org.higherkindedj.example.optics;
import org.higherkindedj.optics.*;
import org.higherkindedj.optics.util.Traversals;
import org.higherkindedj.hkt.Monoids;
import java.util.*;
public class CustomerAnalytics {
public record Item(String name, int price, String category, boolean premium) {}
public record Order(String id, List<Item> items, double total) {}
public record Customer(String name, List<Order> orders, boolean vip) {}
// Reusable optics
private static final Fold<Customer, Order> CUSTOMER_ORDERS = Fold.of(Customer::orders);
private static final Fold<Order, Item> ORDER_ITEMS = Fold.of(Order::items);
private static final Fold<Customer, Item> ALL_CUSTOMER_ITEMS =
CUSTOMER_ORDERS.andThen(ORDER_ITEMS);
public static void main(String[] args) {
List<Customer> customers = createSampleData();
System.out.println("=== CUSTOMER ANALYTICS WITH FILTERED OPTICS ===\n");
// --- Analysis 1: High-Value Customer Identification ---
System.out.println("--- Analysis 1: High-Value Customers ---");
Traversal<List<Customer>, Customer> allCustomers = Traversals.forList();
Fold<Customer, Double> orderTotals = CUSTOMER_ORDERS.andThen(
Getter.of(Order::total).asFold()
);
// Customers with any order over £500
Traversal<List<Customer>, Customer> bigSpenders =
allCustomers.filterBy(orderTotals, total -> total > 500);
List<Customer> highValue = Traversals.getAll(bigSpenders, customers);
System.out.println("Customers with orders over £500: " +
highValue.stream().map(Customer::name).toList());
// --- Analysis 2: Premium Product Buyers ---
System.out.println("\n--- Analysis 2: Premium Product Buyers ---");
Fold<Customer, Item> premiumItems = ALL_CUSTOMER_ITEMS.filtered(Item::premium);
for (Customer customer : customers) {
int premiumCount = premiumItems.length(customer);
if (premiumCount > 0) {
double premiumSpend = premiumItems.foldMap(Monoids.doubleAddition(),
item -> (double) item.price(), customer);
System.out.printf("%s: %d premium items, £%.2f total%n",
customer.name(), premiumCount, premiumSpend);
}
}
// --- Analysis 3: Category-Specific Queries ---
System.out.println("\n--- Analysis 3: Electronics Spending ---");
Fold<Customer, Item> electronicsItems =
ALL_CUSTOMER_ITEMS.filtered(item -> "Electronics".equals(item.category()));
for (Customer customer : customers) {
double electronicsSpend = electronicsItems.foldMap(Monoids.doubleAddition(),
item -> (double) item.price(), customer);
if (electronicsSpend > 0) {
System.out.printf("%s spent £%.2f on Electronics%n",
customer.name(), electronicsSpend);
}
}
// --- Analysis 4: Mark VIP Customers ---
System.out.println("\n--- Analysis 4: Auto-Mark VIP Customers ---");
// Customers who bought premium items AND have any order over £300
Traversal<List<Customer>, Customer> potentialVIPs =
allCustomers
.filterBy(ALL_CUSTOMER_ITEMS, Item::premium) // Has premium items
.filterBy(orderTotals, total -> total > 300); // Has high-value orders
Lens<Customer, Boolean> vipLens =
Lens.of(Customer::vip, (c, v) -> new Customer(c.name(), c.orders(), v));
List<Customer> updatedCustomers = Traversals.modify(
potentialVIPs.andThen(vipLens.asTraversal()),
_ -> true,
customers
);
for (Customer c : updatedCustomers) {
if (c.vip()) {
System.out.println(c.name() + " is now VIP");
}
}
// --- Analysis 5: Aggregated Statistics ---
System.out.println("\n--- Analysis 5: Platform Statistics ---");
Fold<List<Customer>, Customer> customerFold = Fold.of(list -> list);
Fold<List<Customer>, Item> allItems = customerFold.andThen(ALL_CUSTOMER_ITEMS);
Fold<List<Customer>, Item> expensiveItems = allItems.filtered(i -> i.price() > 100);
Fold<List<Customer>, Item> cheapItems = allItems.filtered(i -> i.price() <= 100);
int totalExpensive = expensiveItems.length(customers);
int totalCheap = cheapItems.length(customers);
double expensiveRevenue = expensiveItems.foldMap(Monoids.doubleAddition(),
i -> (double) i.price(), customers);
System.out.printf("Expensive items (>£100): %d items, £%.2f revenue%n",
totalExpensive, expensiveRevenue);
System.out.printf("Budget items (≤£100): %d items%n", totalCheap);
System.out.println("\n=== END OF ANALYTICS ===");
}
private static List<Customer> createSampleData() {
return List.of(
new Customer("Alice", List.of(
new Order("A1", List.of(
new Item("Laptop", 999, "Electronics", true),
new Item("Mouse", 25, "Electronics", false)
), 1024.0),
new Order("A2", List.of(
new Item("Desk", 350, "Furniture", false)
), 350.0)
), false),
new Customer("Bob", List.of(
new Order("B1", List.of(
new Item("Book", 20, "Books", false),
new Item("Pen", 5, "Stationery", false)
), 25.0)
), false),
new Customer("Charlie", List.of(
new Order("C1", List.of(
new Item("Phone", 800, "Electronics", true),
new Item("Case", 50, "Accessories", false)
), 850.0),
new Order("C2", List.of(
new Item("Headphones", 250, "Electronics", true)
), 250.0)
), false)
);
}
}
Expected Output:
=== CUSTOMER ANALYTICS WITH FILTERED OPTICS ===
--- Analysis 1: High-Value Customers ---
Customers with orders over £500: [Alice, Charlie]
--- Analysis 2: Premium Product Buyers ---
Alice: 1 premium items, £999.00 total
Charlie: 2 premium items, £1050.00 total
--- Analysis 3: Electronics Spending ---
Alice spent £1024.00 on Electronics
Charlie spent £1050.00 on Electronics
--- Analysis 4: Auto-Mark VIP Customers ---
Alice is now VIP
Charlie is now VIP
--- Analysis 5: Platform Statistics ---
Expensive items (>£100): 4 items, £2399.00 revenue
Budget items (≤£100): 4 items
=== END OF ANALYTICS ===
The Relationship to Haskell's Lens Library
For those familiar with functional programming, higher-kinded-j's filtered optics are inspired by Haskell's lens library, specifically the filtered combinator.
In Haskell:
filtered :: (a -> Bool) -> Traversal' a a
This creates a traversal that focuses on the value only if it satisfies the predicate—exactly what our Traversals.filtered(Predicate) does.
Key differences:
- Higher-kinded-j uses explicit `Applicative` instances rather than implicit type class resolution
- Java's type system requires more explicit composition steps
- The `filterBy` method is an extension not present in the standard lens library
Further Reading:
- Haskell Lens Tutorial: Traversal - Original inspiration
- Optics By Example by Chris Penner - Comprehensive book on optics
- Monocle (Scala) - A similar optics library for Scala with `filtered` support
Summary: The Power of Filtered Optics
Filtered optics bring declarative filtering into the heart of your optic compositions:
- `filtered(Predicate)`: Focus on elements matching a condition
- `filterBy(Fold, Predicate)`: Focus on elements where a nested query matches
- `Traversals.filtered(Predicate)`: Create reusable affine filter combinators
These tools transform how you work with collections in immutable data structures:
| Before (Imperative) | After (Declarative) |
|---|---|
| Manual loops with conditionals | Single filtered traversal |
| Stream pipelines breaking composition | Filters as part of optic chain |
| Logic scattered across codebase | Reusable, composable filter optics |
| Mix of "what" and "how" | Pure expression of intent |
By incorporating filtered optics into your toolkit, you gain:
- Expressiveness: Say "active users" once, use everywhere
- Composability: Chain filters, compose with lenses, build complex paths
- Type safety: All operations checked at compile time
- Immutability: Structure preserved, only targets modified
- Performance: Single-pass, lazy evaluation, no intermediate collections
Filtered optics represent the pinnacle of declarative data manipulation in Java—where the what (your business logic) is cleanly separated from the how (iteration, filtering, reconstruction), all whilst maintaining full type safety and referential transparency.
Indexed Optics: Position-Aware Operations
Tracking Indices During Transformations
What You'll Learn:
- How to access both index and value during optic operations
- Using IndexedTraversal for position-aware bulk updates
- Using IndexedFold for queries that need position information
- Using IndexedLens for field name tracking and debugging
- Creating indexed traversals for Lists and Maps with IndexedTraversals utility
- Composing indexed optics with paired indices (Pair<I, J>)
- Converting between indexed and non-indexed optics
- When to use indexed optics vs standard optics
- Real-world patterns for debugging, audit trails, and position-based logic
In our journey through optics, we've mastered how to focus on parts of immutable data structures—whether it's a single field with Lens, an optional value with Prism, or multiple elements with Traversal. But sometimes, knowing where you are is just as important as knowing what you're looking at.
Consider these scenarios:
- Numbering items in a packing list: "Item 1: Laptop, Item 2: Mouse..."
- Tracking field names for audit logs: "User modified field 'email' from..."
- Processing map entries where both key and value matter: "For metadata key 'priority', set value to..."
- Debugging nested updates by seeing the complete path: "Changed scores[2] from 100 to 150"
Standard optics give you the value. Indexed optics give you both the index and the value.
The Scenario: E-Commerce Order Processing
Imagine building an order fulfilment system where position information drives business logic.
The Data Model:
@GenerateLenses
public record LineItem(String productName, int quantity, double price) {}
@GenerateLenses
@GenerateTraversals
public record Order(String orderId, List<LineItem> items, Map<String, String> metadata) {}
@GenerateLenses
public record Customer(String name, String email) {}
Business Requirements:
- Generate packing slips with numbered items: "Item 1: Laptop (£999.99)"
- Process metadata with key awareness: "Set shipping method based on 'priority' key"
- Audit trail showing which fields were modified: "Updated Customer.email at 2025-01-15 10:30"
- Position-based pricing for bulk orders: "Items at even positions get 10% discount"
The Traditional Approach:
// Verbose: Manual index tracking
List<String> packingSlip = new ArrayList<>();
for (int i = 0; i < order.items().size(); i++) {
LineItem item = order.items().get(i);
packingSlip.add("Item " + (i + 1) + ": " + item.productName());
}
// Or with streams, losing type-safety
AtomicInteger counter = new AtomicInteger(1);
order.items().stream()
.map(item -> "Item " + counter.getAndIncrement() + ": " + item.productName())
.collect(toList());
// Map processing requires breaking into entries
order.metadata().entrySet().stream()
.map(entry -> processWithKey(entry.getKey(), entry.getValue()))
.collect(toMap(Entry::getKey, Entry::getValue));
This approach forces manual index management, mixing the what (transformation logic) with the how (index tracking). Indexed optics provide a declarative, type-safe solution.
Think of Indexed Optics Like...
- GPS coordinates: Not just the destination, but the latitude and longitude
- Line numbers in an editor: Every line knows its position in the file
- Map.Entry: Provides both key and value instead of just the value
- Breadcrumbs in a file system: Showing the complete path to each file
- A numbered list: Each element has both content and a position
- Spreadsheet cells: Both the cell reference (A1, B2) and the value
The key insight: indexed optics make position a first-class citizen, accessible during every operation.
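The essence of this is small enough to sketch in plain Java — this illustrates the concept, not the library's implementation. An indexed modification passes a `BiFunction` that receives both the zero-based position and the value:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

public class IndexedSketch {
    // Indexed modify over a List: the function sees BOTH the zero-based
    // index and the element, so position can drive the transformation.
    public static <A> List<A> imodify(BiFunction<Integer, A, A> f, List<A> src) {
        List<A> out = new ArrayList<>(src.size());
        for (int i = 0; i < src.size(); i++) {
            out.add(f.apply(i, src.get(i)));
        }
        return List.copyOf(out);
    }

    public static void main(String[] args) {
        List<String> numbered = imodify((i, s) -> (i + 1) + ". " + s,
                List.of("Review PR", "Update docs"));
        System.out.println(numbered); // [1. Review PR, 2. Update docs]
    }
}
```

Compare this with a standard traversal, whose function only ever sees the value.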
Part I: The Basics
The Three Indexed Optics
Higher-kinded-j provides three indexed optics that mirror their standard counterparts:
| Standard Optic | Indexed Variant | Index Type | Use Case |
|---|---|---|---|
| Traversal<S, A> | IndexedTraversal<I, S, A> | I (any type) | Position-aware bulk updates (List indices, Map keys) |
| Fold<S, A> | IndexedFold<I, S, A> | I (any type) | Position-aware read-only queries |
| Lens<S, A> | IndexedLens<I, S, A> | I (any type) | Field name tracking for single-field access |
The additional type parameter I represents the index type:
- For
List<A>:IisInteger(position 0, 1, 2...) - For
Map<K, V>:IisK(the key type) - For record fields:
IisString(field name) - Custom: Any type that makes sense for your domain
A Step-by-Step Walkthrough
Step 1: Creating Indexed Traversals
The IndexedTraversals utility class provides factory methods for common cases.
For Lists: Integer Indices
import org.higherkindedj.optics.indexed.IndexedTraversal;
import org.higherkindedj.optics.util.IndexedTraversals;
// Create an indexed traversal for List elements
IndexedTraversal<Integer, List<LineItem>, LineItem> itemsWithIndex =
IndexedTraversals.forList();
List<LineItem> items = List.of(
new LineItem("Laptop", 1, 999.99),
new LineItem("Mouse", 2, 24.99),
new LineItem("Keyboard", 1, 79.99)
);
The forList() factory creates a traversal where each element is paired with its zero-based index.
For Maps: Key-Based Indices
// Create an indexed traversal for Map values
IndexedTraversal<String, Map<String, String>, String> metadataWithKeys =
IndexedTraversals.forMap();
Map<String, String> metadata = Map.of(
"priority", "express",
"gift-wrap", "true",
"delivery-note", "Leave at door"
);
The forMap() factory creates a traversal where each value is paired with its key.
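For maps, the "index" is the key. The plain-Java sketch below (an illustration, not the library's code) shows the shape of a key-aware value transformation:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiFunction;

public class KeyAwareSketch {
    // Key-aware modify over a Map: the function receives the key alongside
    // each value, so the key can influence the new value.
    public static <K, V> Map<K, V> imodifyValues(BiFunction<K, V, V> f, Map<K, V> src) {
        Map<K, V> out = new LinkedHashMap<>();
        src.forEach((k, v) -> out.put(k, f.apply(k, v)));
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> meta = new LinkedHashMap<>();
        meta.put("priority", "express");
        Map<String, String> tagged = imodifyValues((k, v) -> "[" + k + "] " + v, meta);
        System.out.println(tagged); // {priority=[priority] express}
    }
}
```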
Step 2: Accessing Index-Value Pairs
Indexed optics provide specialized methods that give you access to both the index and the value.
Extracting All Index-Value Pairs
import org.higherkindedj.optics.indexed.Pair;
// Get list of (index, item) pairs
List<Pair<Integer, LineItem>> indexedItems = itemsWithIndex.toIndexedList(items);
for (Pair<Integer, LineItem> pair : indexedItems) {
int position = pair.first();
LineItem item = pair.second();
System.out.println("Position " + position + ": " + item.productName());
}
// Output:
// Position 0: Laptop
// Position 1: Mouse
// Position 2: Keyboard
Using IndexedFold for Queries
import org.higherkindedj.optics.indexed.IndexedFold;
// Convert to read-only indexed fold
IndexedFold<Integer, List<LineItem>, LineItem> itemsFold =
itemsWithIndex.asIndexedFold();
// Find item at a specific position
Pair<Integer, LineItem> found = itemsFold.findWithIndex(
(index, item) -> index == 1,
items
).orElse(null);
System.out.println("Item at index 1: " + found.second().productName());
// Output: Item at index 1: Mouse
// Check if any even-positioned item is expensive
boolean hasExpensiveEven = itemsFold.existsWithIndex(
(index, item) -> index % 2 == 0 && item.price() > 500,
items
);
Step 3: Position-Aware Modifications
The real power emerges when you modify elements based on their position.
Numbering Items in a Packing Slip
// Modify product names to include position numbers
List<LineItem> numbered = IndexedTraversals.imodify(
itemsWithIndex,
(index, item) -> new LineItem(
"Item " + (index + 1) + ": " + item.productName(),
item.quantity(),
item.price()
),
items
);
for (LineItem item : numbered) {
System.out.println(item.productName());
}
// Output:
// Item 1: Laptop
// Item 2: Mouse
// Item 3: Keyboard
Position-Based Discount Logic
// Apply 10% discount to items at even positions (0, 2, 4...)
List<LineItem> discounted = IndexedTraversals.imodify(
itemsWithIndex,
(index, item) -> {
if (index % 2 == 0) {
double discountedPrice = item.price() * 0.9;
return new LineItem(item.productName(), item.quantity(), discountedPrice);
}
return item;
},
items
);
// Position 0 (Laptop): £999.99 → £899.99
// Position 1 (Mouse): £24.99 (unchanged)
// Position 2 (Keyboard): £79.99 → £71.99
Map Processing with Key Awareness
IndexedTraversal<String, Map<String, String>, String> metadataTraversal =
IndexedTraversals.forMap();
Map<String, String> processed = IndexedTraversals.imodify(
metadataTraversal,
(key, value) -> {
// Add key prefix to all values for debugging
return "[" + key + "] " + value;
},
metadata
);
// Results:
// "priority" → "[priority] express"
// "gift-wrap" → "[gift-wrap] true"
// "delivery-note" → "[delivery-note] Leave at door"
Step 4: Filtering with Index Awareness
Indexed traversals support filtering, allowing you to focus on specific positions or keys.
Filter by Index
// Focus only on even-positioned items
IndexedTraversal<Integer, List<LineItem>, LineItem> evenPositions =
itemsWithIndex.filterIndex(index -> index % 2 == 0);
List<Pair<Integer, LineItem>> evenItems =
IndexedTraversals.toIndexedList(evenPositions, items);
// Returns: [(0, Laptop), (2, Keyboard)]
// Modify only even-positioned items
List<LineItem> result = IndexedTraversals.imodify(
evenPositions,
(index, item) -> new LineItem(
item.productName() + " [SALE]",
item.quantity(),
item.price()
),
items
);
// Laptop and Keyboard get "[SALE]" suffix, Mouse unchanged
Filter by Value with Index Available
// Focus on expensive items, but still track their original positions
IndexedTraversal<Integer, List<LineItem>, LineItem> expensiveItems =
itemsWithIndex.filteredWithIndex((index, item) -> item.price() > 50);
List<Pair<Integer, LineItem>> expensive =
IndexedTraversals.toIndexedList(expensiveItems, items);
// Returns: [(0, Laptop), (2, Keyboard)]
// Notice: indices are preserved (0 and 2), not renumbered
Filter Map by Key Pattern
// Focus on metadata keys starting with "delivery"
IndexedTraversal<String, Map<String, String>, String> deliveryMetadata =
metadataTraversal.filterIndex(key -> key.startsWith("delivery"));
List<Pair<String, String>> deliveryEntries =
deliveryMetadata.toIndexedList(metadata);
// Returns: [("delivery-note", "Leave at door")]
Step 5: IndexedLens for Field Tracking
An IndexedLens focuses on exactly one field whilst providing its name or identifier.
import org.higherkindedj.optics.indexed.IndexedLens;
// Create an indexed lens for the customer email field
IndexedLens<String, Customer, String> emailLens = IndexedLens.of(
"email", // The index: field name
Customer::email, // Getter
(customer, newEmail) -> new Customer(customer.name(), newEmail) // Setter
);
Customer customer = new Customer("Alice", "alice@example.com");
// Get both field name and value
Pair<String, String> fieldInfo = emailLens.iget(customer);
System.out.println("Field: " + fieldInfo.first()); // email
System.out.println("Value: " + fieldInfo.second()); // alice@example.com
// Modify with field name awareness
Customer updated = emailLens.imodify(
(fieldName, oldValue) -> {
System.out.println("Updating field '" + fieldName + "' from " + oldValue);
return "alice.smith@example.com";
},
customer
);
// Output: Updating field 'email' from alice@example.com
Use case: Audit logging that records which field changed, not just the new value.
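The audit-logging idea can be sketched in plain Java. This is a conceptual model of an indexed lens (the `NamedLens` record and its `imodify` signature are illustrative, not the library's API): because the field name travels with the getter and setter, every modification can be logged with which field changed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Function;

public class AuditLensSketch {
    // A field-name-carrying lens in miniature: get/set plus the field's name,
    // so modifications can record WHICH field changed, not just the new value.
    record NamedLens<S, A>(String name, Function<S, A> get, BiFunction<S, A, S> set) {
        S imodify(BiFunction<String, A, A> f, S s, List<String> audit) {
            A oldValue = get.apply(s);
            A newValue = f.apply(name, oldValue);
            audit.add("field '" + name + "': " + oldValue + " -> " + newValue);
            return set.apply(s, newValue);
        }
    }

    record Customer(String name, String email) {}

    public static void main(String[] args) {
        NamedLens<Customer, String> email = new NamedLens<>(
                "email", Customer::email, (c, e) -> new Customer(c.name(), e));
        List<String> audit = new ArrayList<>();
        Customer updated = email.imodify((field, old) -> "alice.smith@example.com",
                new Customer("Alice", "alice@example.com"), audit);
        System.out.println(audit.get(0));
        // field 'email': alice@example.com -> alice.smith@example.com
    }
}
```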
Step 6: Converting Between Indexed and Non-Indexed
Every indexed optic can be converted to its standard (non-indexed) counterpart.
import org.higherkindedj.optics.Traversal;
// Start with indexed traversal
IndexedTraversal<Integer, List<LineItem>, LineItem> indexed =
IndexedTraversals.forList();
// Drop the index to get a standard traversal
Traversal<List<LineItem>, LineItem> standard = indexed.unindexed();
// Now you can use standard traversal methods
List<LineItem> uppercased = Traversals.modify(
standard.andThen(Lens.of(
LineItem::productName,
(item, name) -> new LineItem(name, item.quantity(), item.price())
).asTraversal()),
String::toUpperCase,
items
);
When to convert: When you need the index for some operations but not others, start indexed and convert as needed.
When to Use Indexed Optics vs Standard Optics
Understanding when indexed optics add value is crucial for writing clear, maintainable code.
Use Indexed Optics When:
- Position-based logic - Different behaviour for even/odd indices, first/last elements
- Numbering or labelling - Adding sequence numbers, prefixes, or position markers
- Map operations - Both key and value are needed during transformation
- Audit trails - Recording which field or position was modified
- Debugging complex updates - Tracking the path to each change
- Index-based filtering - Operating on specific positions or key patterns
// Perfect: Position drives the logic
IndexedTraversal<Integer, List<Product>, Product> productsIndexed =
IndexedTraversals.forList();
List<Product> prioritised = productsIndexed.imodify(
(index, product) -> {
// First 3 products get express shipping
String shipping = index < 3 ? "express" : "standard";
return product.withShipping(shipping);
},
products
);
Use Standard Optics When:
- Position irrelevant - Pure value transformations
- Simpler code - Index tracking adds unnecessary complexity
- Performance critical - Minimal overhead needed (though indexed optics are optimised)
- No positional logic - All elements treated identically
// Better with standard optics: Index not needed
Traversal<List<Product>, Double> prices =
Traversals.<Product>forList()
.andThen(ProductLenses.price().asTraversal());
List<Product> inflated = Traversals.modify(prices, price -> price * 1.1, products);
// All prices increased by 10%, position doesn't matter
Common Patterns: Position-Based Operations
Pattern 1: Adding Sequence Numbers
// Generate a numbered list for display
IndexedTraversal<Integer, List<String>, String> indexed = IndexedTraversals.forList();
List<String> tasks = List.of("Review PR", "Update docs", "Run tests");
List<String> numbered = IndexedTraversals.imodify(
indexed,
(i, task) -> (i + 1) + ". " + task,
tasks
);
// ["1. Review PR", "2. Update docs", "3. Run tests"]
Pattern 2: First/Last Element Special Handling
IndexedTraversal<Integer, List<LineItem>, LineItem> itemsIndexed =
IndexedTraversals.forList();
List<LineItem> items = List.of(/* ... */);
int lastIndex = items.size() - 1;
List<LineItem> marked = IndexedTraversals.imodify(
itemsIndexed,
(index, item) -> {
String marker = "";
if (index == 0) marker = "[FIRST] ";
if (index == lastIndex) marker = "[LAST] ";
return new LineItem(
marker + item.productName(),
item.quantity(),
item.price()
);
},
items
);
Pattern 3: Map Key-Value Transformations
IndexedTraversal<String, Map<String, Integer>, Integer> mapIndexed =
IndexedTraversals.forMap();
Map<String, Integer> scores = Map.of(
"alice", 100,
"bob", 85,
"charlie", 92
);
// Create display strings incorporating both key and value
List<String> results = IndexedTraversals.toIndexedList(mapIndexed, scores).stream()
.map(pair -> pair.first() + " scored " + pair.second())
.toList();
// ["alice scored 100", "bob scored 85", "charlie scored 92"]
Pattern 4: Position-Based Filtering
IndexedTraversal<Integer, List<String>, String> indexed = IndexedTraversals.forList();
List<String> values = List.of("a", "b", "c", "d", "e", "f");
// Take only odd positions (1, 3, 5)
IndexedTraversal<Integer, List<String>, String> oddPositions =
indexed.filterIndex(i -> i % 2 == 1);
List<String> odd = IndexedTraversals.getAll(oddPositions, values);
// ["b", "d", "f"]
Common Pitfalls
❌ Don't Do This:
// Inefficient: Recreating indexed traversals in loops
for (Order order : orders) {
var indexed = IndexedTraversals.<LineItem>forList();
IndexedTraversals.imodify(indexed, (i, item) -> numberItem(i, item), order.items());
}
// Over-engineering: Using indexed optics when index isn't needed
IndexedTraversal<Integer, List<String>, String> indexed = IndexedTraversals.forList();
List<String> upper = IndexedTraversals.imodify(indexed, (i, s) -> s.toUpperCase(), list);
// Index parameter 'i' is never used! Use standard Traversals.modify()
// Confusing: Manual index tracking alongside indexed optics
AtomicInteger counter = new AtomicInteger(0);
IndexedTraversals.imodify(indexed, (i, item) -> {
int myIndex = counter.getAndIncrement(); // Redundant!
return process(myIndex, item);
}, items);
// Wrong: Expecting indices to be renumbered after filtering
IndexedTraversal<Integer, List<String>, String> evenOnly =
indexed.filterIndex(i -> i % 2 == 0);
List<Pair<Integer, String>> pairs = IndexedTraversals.toIndexedList(evenOnly, list);
// Indices are [0, 2, 4], NOT [0, 1, 2] - original positions preserved!
✅ Do This Instead:
// Efficient: Create indexed traversal once, reuse many times
IndexedTraversal<Integer, List<LineItem>, LineItem> itemsIndexed =
IndexedTraversals.forList();
for (Order order : orders) {
IndexedTraversals.imodify(itemsIndexed, (i, item) -> numberItem(i, item), order.items());
}
// Simple: Use standard traversals when index isn't needed
Traversal<List<String>, String> standard = Traversals.forList();
List<String> upper = Traversals.modify(standard, String::toUpperCase, list);
// Clear: Trust the indexed optic to provide correct indices
IndexedTraversals.imodify(indexed, (providedIndex, item) -> {
// Use providedIndex directly, it's correct
return process(providedIndex, item);
}, items);
// Understand: Filtered indexed traversals preserve original indices
IndexedTraversal<Integer, List<String>, String> evenOnly =
indexed.filterIndex(i -> i % 2 == 0);
List<Pair<Integer, String>> pairs = IndexedTraversals.toIndexedList(evenOnly, list);
// If you need renumbered indices, transform after extraction:
List<Pair<Integer, String>> renumbered = IntStream.range(0, pairs.size())
.mapToObj(newIndex -> new Pair<>(newIndex, pairs.get(newIndex).second()))
.toList();
Performance Notes
Indexed optics are designed to be efficient:
- No additional traversals - Index computed during normal iteration
- Lazy index creation - Pair<I, A> objects only created when needed
- Minimal overhead - Index tracking adds negligible cost
- Reusable compositions - Indexed optics can be composed and cached
- No boxing for primitives - When using integer indices directly
Best Practice: Create indexed optics once and store as constants:
public class OrderOptics {
public static final IndexedTraversal<Integer, List<LineItem>, LineItem>
ITEMS_WITH_INDEX = IndexedTraversals.forList();
public static final IndexedTraversal<String, Map<String, String>, String>
METADATA_WITH_KEYS = IndexedTraversals.forMap();
// Compose with filtering
public static final IndexedTraversal<Integer, List<LineItem>, LineItem>
EVEN_POSITIONED_ITEMS = ITEMS_WITH_INDEX.filterIndex(i -> i % 2 == 0);
}
Part II: Advanced Topics
Composing Indexed Optics with Paired Indices
When you compose two indexed optics, the indices form a pair representing the path through nested structures.
import org.higherkindedj.optics.indexed.Pair;
// Nested structure: List of Orders, each with List of Items
record Order(String id, List<LineItem> items) {}
// First level: indexed traversal for orders
IndexedTraversal<Integer, List<Order>, Order> ordersIndexed =
IndexedTraversals.forList();
// Second level: lens to items field
Lens<Order, List<LineItem>> itemsLens =
Lens.of(Order::items, (order, items) -> new Order(order.id(), items));
// Third level: indexed traversal for items
IndexedTraversal<Integer, List<LineItem>, LineItem> itemsIndexed =
IndexedTraversals.forList();
// Compose: orders → items field → each item with PAIRED indices
IndexedTraversal<Pair<Integer, Integer>, List<Order>, LineItem> composed =
ordersIndexed
.iandThen(itemsLens)
.iandThen(itemsIndexed);
List<Order> orders = List.of(
new Order("ORD-1", List.of(
new LineItem("Laptop", 1, 999.99),
new LineItem("Mouse", 1, 24.99)
)),
new Order("ORD-2", List.of(
new LineItem("Keyboard", 1, 79.99),
new LineItem("Monitor", 1, 299.99)
))
);
// Access with paired indices: (order index, item index)
List<Pair<Pair<Integer, Integer>, LineItem>> all = composed.toIndexedList(orders);
for (Pair<Pair<Integer, Integer>, LineItem> entry : all) {
Pair<Integer, Integer> indices = entry.first();
LineItem item = entry.second();
System.out.printf("Order %d, Item %d: %s%n",
indices.first(), indices.second(), item.productName());
}
// Output:
// Order 0, Item 0: Laptop
// Order 0, Item 1: Mouse
// Order 1, Item 0: Keyboard
// Order 1, Item 1: Monitor
Use case: Generating globally unique identifiers like "Order 3, Item 5" or "Row 2, Column 7".
Index Transformation and Mapping
You can transform indices whilst preserving the optic composition.
// Start with integer indices (0, 1, 2...)
IndexedTraversal<Integer, List<LineItem>, LineItem> zeroIndexed =
IndexedTraversals.forList();
// Transform to 1-based indices (1, 2, 3...)
IndexedTraversal<Integer, List<LineItem>, LineItem> oneIndexed =
zeroIndexed.reindex(i -> i + 1);
List<LineItem> items = List.of(/* ... */);
List<LineItem> renamed = oneIndexed.imodify(
(index, item) -> new LineItem("Item " + index + ": " + item.productName(),
item.quantity(), item.price()),
items
);
List<String> numbered = renamed.stream()
.map(LineItem::productName)
.toList();
// ["Item 1: Laptop", "Item 2: Mouse", "Item 3: Keyboard"]
Note: The reindex method is conceptual. In practice, you'd transform indices in your imodify function:
zeroIndexed.imodify((zeroBasedIndex, item) -> {
int oneBasedIndex = zeroBasedIndex + 1;
return new LineItem("Item " + oneBasedIndex + ": " + item.productName(),
item.quantity(), item.price());
}, items);
Combining Index Filtering with Value Filtering
You can layer multiple filters for precise control.
IndexedTraversal<Integer, List<LineItem>, LineItem> itemsIndexed =
IndexedTraversals.forList();
// Filter: even positions AND expensive items
IndexedTraversal<Integer, List<LineItem>, LineItem> targeted =
itemsIndexed
.filterIndex(i -> i % 2 == 0) // Even positions only
.filtered(item -> item.price() > 50); // Expensive items only
List<LineItem> items = List.of(
new LineItem("Laptop", 1, 999.99), // Index 0, expensive ✓
new LineItem("Pen", 1, 2.99), // Index 1, cheap ✗
new LineItem("Keyboard", 1, 79.99), // Index 2, expensive ✓
new LineItem("Mouse", 1, 24.99), // Index 3, cheap ✗
new LineItem("Monitor", 1, 299.99) // Index 4, expensive ✓
);
List<Pair<Integer, LineItem>> results = targeted.toIndexedList(items);
// Returns: [(0, Laptop), (2, Keyboard), (4, Monitor)]
// All at even positions AND expensive
Audit Trail Pattern: Field Change Tracking
A powerful real-world pattern is tracking which fields change in your domain objects.
// Generic field audit logger
public class AuditLog {
public record FieldChange<A>(
String fieldName,
A oldValue,
A newValue,
Instant timestamp
) {}
public static <A> BiFunction<String, A, A> loggedModification(
Function<A, A> transformation,
List<FieldChange<?>> auditLog
) {
return (fieldName, oldValue) -> {
A newValue = transformation.apply(oldValue);
if (!oldValue.equals(newValue)) {
auditLog.add(new FieldChange<>(
fieldName,
oldValue,
newValue,
Instant.now()
));
}
return newValue;
};
}
}
// Usage with indexed lens
IndexedLens<String, Customer, String> emailLens = IndexedLens.of(
"email",
Customer::email,
(c, email) -> new Customer(c.name(), email)
);
List<AuditLog.FieldChange<?>> audit = new ArrayList<>();
Customer customer = new Customer("Alice", "alice@old.com");
Customer updated = emailLens.imodify(
AuditLog.loggedModification(
email -> "alice@new.com",
audit
),
customer
);
// Check audit log
for (AuditLog.FieldChange<?> change : audit) {
System.out.printf("Field '%s' changed from %s to %s at %s%n",
change.fieldName(),
change.oldValue(),
change.newValue(),
change.timestamp()
);
}
// Output: Field 'email' changed from alice@old.com to alice@new.com at 2025-01-15T10:30:00Z
Debugging Pattern: Path Tracking in Nested Updates
When debugging complex nested updates, indexed optics reveal the complete path to each modification.
// Nested structure with multiple levels
record Item(String name, double price) {}
record Order(List<Item> items) {}
record Customer(String name, List<Order> orders) {}
// Build an indexed path through the structure
IndexedTraversal<Integer, List<Customer>, Customer> customersIdx =
IndexedTraversals.forList();
Lens<Customer, List<Order>> ordersLens =
Lens.of(Customer::orders, (c, o) -> new Customer(c.name(), o));
IndexedTraversal<Integer, List<Order>, Order> ordersIdx =
IndexedTraversals.forList();
Lens<Order, List<Item>> itemsLens =
Lens.of(Order::items, (order, items) -> new Order(items));
IndexedTraversal<Integer, List<Item>, Item> itemsIdx =
IndexedTraversals.forList();
Lens<Item, Double> priceLens =
Lens.of(Item::price, (item, price) -> new Item(item.name(), price));
// Compose the full indexed path
IndexedTraversal<Pair<Pair<Integer, Integer>, Integer>, List<Customer>, Double> fullPath =
customersIdx
.iandThen(ordersLens)
.iandThen(ordersIdx)
.iandThen(itemsLens)
.iandThen(itemsIdx)
.iandThen(priceLens);
List<Customer> customers = List.of(/* ... */);
// Modify with full path visibility
List<Customer> updated = fullPath.imodify(
(indices, price) -> {
int customerIdx = indices.first().first();
int orderIdx = indices.first().second();
int itemIdx = indices.second();
System.out.printf(
"Updating price at [customer=%d, order=%d, item=%d]: %.2f → %.2f%n",
customerIdx, orderIdx, itemIdx, price, price * 1.1
);
return price * 1.1; // 10% increase
},
customers
);
// Output shows complete path to every modified price:
// Updating price at [customer=0, order=0, item=0]: 999.99 → 1099.99
// Updating price at [customer=0, order=0, item=1]: 24.99 → 27.49
// Updating price at [customer=0, order=1, item=0]: 79.99 → 87.99
// ...
Working with Pair Utilities
The Pair<A, B> type provides utility methods for manipulation.
import org.higherkindedj.optics.indexed.Pair;
Pair<Integer, String> pair = new Pair<>(1, "Hello");
// Access components
int first = pair.first(); // 1
String second = pair.second(); // "Hello"
// Transform components
Pair<Integer, String> modified = pair.withSecond("World");
// Result: Pair(1, "World")
Pair<String, String> transformed = pair.withFirst("One");
// Result: Pair("One", "Hello")
// Swap
Pair<String, Integer> swapped = pair.swap();
// Result: Pair("Hello", 1)
// Factory method
Pair<String, Integer> created = Pair.of("Key", 42);
For converting to/from Tuple2 (when working with hkj-core utilities):
import org.higherkindedj.hkt.Tuple2;
import org.higherkindedj.optics.util.IndexedTraversals;
Pair<String, Integer> pair = Pair.of("key", 100);
// Convert to Tuple2
Tuple2<String, Integer> tuple = IndexedTraversals.pairToTuple2(pair);
// Convert back to Pair
Pair<String, Integer> converted = IndexedTraversals.tuple2ToPair(tuple);
Real-World Example: Order Fulfilment Dashboard
Here's a comprehensive example demonstrating indexed optics in a business context.
package org.higherkindedj.example.optics;
import java.time.Instant;
import java.util.*;
import org.higherkindedj.optics.indexed.*;
import org.higherkindedj.optics.util.IndexedTraversals;
public class OrderFulfilmentDashboard {
public record LineItem(String productName, int quantity, double price) {}
public record Order(
String orderId,
List<LineItem> items,
Map<String, String> metadata
) {}
public static void main(String[] args) {
Order order = new Order(
"ORD-12345",
List.of(
new LineItem("Laptop", 1, 999.99),
new LineItem("Mouse", 2, 24.99),
new LineItem("Keyboard", 1, 79.99),
new LineItem("Monitor", 1, 299.99)
),
new LinkedHashMap<>(Map.of(
"priority", "express",
"gift-wrap", "true",
"delivery-note", "Leave at door"
))
);
System.out.println("=== ORDER FULFILMENT DASHBOARD ===\n");
// --- Task 1: Generate Packing Slip ---
System.out.println("--- Packing Slip ---");
generatePackingSlip(order);
// --- Task 2: Apply Position-Based Discounts ---
System.out.println("\n--- Position-Based Discounts ---");
Order discounted = applyPositionDiscounts(order);
System.out.println("Original total: £" + calculateTotal(order));
System.out.println("Discounted total: £" + calculateTotal(discounted));
// --- Task 3: Process Metadata with Key Awareness ---
System.out.println("\n--- Metadata Processing ---");
processMetadata(order);
// --- Task 4: Identify High-Value Positions ---
System.out.println("\n--- High-Value Items ---");
identifyHighValuePositions(order);
System.out.println("\n=== END OF DASHBOARD ===");
}
private static void generatePackingSlip(Order order) {
IndexedTraversal<Integer, List<LineItem>, LineItem> itemsIndexed =
IndexedTraversals.forList();
List<Pair<Integer, LineItem>> indexedItems =
itemsIndexed.toIndexedList(order.items());
System.out.println("Order: " + order.orderId());
for (Pair<Integer, LineItem> pair : indexedItems) {
int position = pair.first() + 1; // 1-based for display
LineItem item = pair.second();
System.out.printf(" Item %d: %s (Qty: %d) - £%.2f%n",
position,
item.productName(),
item.quantity(),
item.price() * item.quantity()
);
}
}
private static Order applyPositionDiscounts(Order order) {
IndexedTraversal<Integer, List<LineItem>, LineItem> itemsIndexed =
IndexedTraversals.forList();
// Every 3rd item gets 15% off (positions 2, 5, 8...)
List<LineItem> discounted = itemsIndexed.imodify(
(index, item) -> {
if ((index + 1) % 3 == 0) {
double newPrice = item.price() * 0.85;
System.out.printf(" Position %d (%s): £%.2f → £%.2f (15%% off)%n",
index + 1, item.productName(), item.price(), newPrice);
return new LineItem(item.productName(), item.quantity(), newPrice);
}
return item;
},
order.items()
);
return new Order(order.orderId(), discounted, order.metadata());
}
private static void processMetadata(Order order) {
IndexedTraversal<String, Map<String, String>, String> metadataIndexed =
IndexedTraversals.forMap();
IndexedFold<String, Map<String, String>, String> fold =
metadataIndexed.asIndexedFold();
List<Pair<String, String>> entries = fold.toIndexedList(order.metadata());
for (Pair<String, String> entry : entries) {
String key = entry.first();
String value = entry.second();
// Process based on key
switch (key) {
case "priority" ->
System.out.println(" Shipping priority: " + value.toUpperCase());
case "gift-wrap" ->
System.out.println(" Gift wrapping: " +
(value.equals("true") ? "Required" : "Not required"));
case "delivery-note" ->
System.out.println(" Special instructions: " + value);
default ->
System.out.println(" " + key + ": " + value);
}
}
}
private static void identifyHighValuePositions(Order order) {
IndexedTraversal<Integer, List<LineItem>, LineItem> itemsIndexed =
IndexedTraversals.forList();
// Filter to items over £100
IndexedTraversal<Integer, List<LineItem>, LineItem> highValue =
itemsIndexed.filteredWithIndex((index, item) -> item.price() > 100);
List<Pair<Integer, LineItem>> expensive = highValue.toIndexedList(order.items());
System.out.println(" Items over £100 (require special handling):");
for (Pair<Integer, LineItem> pair : expensive) {
System.out.printf(" Position %d: %s (£%.2f)%n",
pair.first() + 1,
pair.second().productName(),
pair.second().price()
);
}
}
private static double calculateTotal(Order order) {
return order.items().stream()
.mapToDouble(item -> item.price() * item.quantity())
.sum();
}
}
Expected Output:
=== ORDER FULFILMENT DASHBOARD ===
--- Packing Slip ---
Order: ORD-12345
Item 1: Laptop (Qty: 1) - £999.99
Item 2: Mouse (Qty: 2) - £49.98
Item 3: Keyboard (Qty: 1) - £79.99
Item 4: Monitor (Qty: 1) - £299.99
--- Position-Based Discounts ---
Position 3 (Keyboard): £79.99 → £67.99 (15% off)
Original total: £1429.95
Discounted total: £1417.95
--- Metadata Processing ---
Shipping priority: EXPRESS
Gift wrapping: Required
Special instructions: Leave at door
--- High-Value Items ---
Items over £100 (require special handling):
Position 1: Laptop (£999.99)
Position 4: Monitor (£299.99)
=== END OF DASHBOARD ===
The Relationship to Haskell's Lens Library
For those familiar with functional programming, higher-kinded-j's indexed optics are inspired by Haskell's lens library, specifically indexed traversals and indexed folds.
In Haskell:
itraversed :: IndexedTraversal' Int [a] a
This creates an indexed traversal over lists where the index is an integer—exactly what our IndexedTraversals.forList() provides.
Key differences:
- Higher-kinded-j uses explicit Applicative instances rather than implicit type class resolution
- Java's type system requires explicit Pair<I, A> for index-value pairs
- The imodify and iget methods provide a more Java-friendly API
- Map-based traversals (forMap) are a practical extension for Java's collection library
Further Reading:
- Haskell Lens Tutorial: Indexed Optics - Original inspiration
- Optics By Example by Chris Penner - Chapter on indexed optics
- Monocle (Scala) - Similar indexed optics for Scala
Summary: The Power of Indexed Optics
Indexed optics bring position awareness into your functional data transformations:
- IndexedTraversal<I, S, A>: Bulk operations with index tracking
- IndexedFold<I, S, A>: Read-only queries with position information
- IndexedLens<I, S, A>: Single-field access with field name tracking
These tools transform how you work with collections and records:
| Before (Manual Index Tracking) | After (Declarative Indexed Optics) |
|---|---|
| Manual loop counters | Built-in index access |
| AtomicInteger for streams | Type-safe imodify |
| Breaking into Map.entrySet() | Direct key-value processing |
| Complex audit logging logic | Field tracking with IndexedLens |
| Scattered position logic | Composable indexed transformations |
By incorporating indexed optics into your toolkit, you gain:
- Expressiveness: Say "numbered list items" declaratively
- Type safety: Compile-time checked index types
- Composability: Chain indexed optics, filter by position, compose with standard optics
- Debugging power: Track complete paths through nested structures
- Audit trails: Record which fields changed, not just values
- Performance: Minimal overhead, lazy index computation
Indexed optics represent the fusion of position awareness with functional composition—enabling you to write code that is simultaneously more declarative, more powerful, and more maintainable than traditional index-tracking approaches.
Limiting Traversals: Focusing on List Portions
Declarative Slicing for Targeted Operations
- How to focus on specific portions of lists (first n, last n, slices)
- Using ListTraversals factory methods for index-based operations
- The difference between limiting traversals and Stream's limit()/skip()
limit()/skip() - Composing limiting traversals with lenses, prisms, and filtered optics
- Understanding edge case handling (negative indices, bounds exceeding list size)
- Real-world patterns for pagination, batch processing, and time-series windowing
- When to use limiting traversals vs Stream API vs manual loops
In our journey through optics, we've seen how Traversal handles bulk operations on all elements of a collection, and how filtered optics let us focus on elements matching a predicate. But what about focusing on elements by position—the first few items, the last few, or a specific slice?
Traditionally, working with list portions requires breaking out of your optic composition to use streams or manual index manipulation. Limiting traversals solve this elegantly by making positional focus a first-class part of your optic composition.
The Scenario: Product Catalogue Management
Imagine you're building an e-commerce platform where you need to:
- Display only the first 10 products on a landing page
- Apply discounts to all except the last 3 featured items
- Process customer orders in chunks of 50 for batch shipping
- Analyse the most recent 7 days of time-series sales data
- Update metadata for products between positions 5 and 15 in a ranked list
The Data Model:
@GenerateLenses
public record Product(String sku, String name, double price, int stock) {
Product applyDiscount(double percentage) {
return new Product(sku, name, price * (1 - percentage), stock);
}
}
@GenerateLenses
public record Catalogue(String name, List<Product> products) {}
@GenerateLenses
public record Order(String id, List<LineItem> items, LocalDateTime created) {}
@GenerateLenses
public record LineItem(Product product, int quantity) {}
@GenerateLenses
public record SalesMetric(LocalDate date, double revenue, int transactions) {}
The Traditional Approach:
// Verbose: Manual slicing breaks optic composition
List<Product> firstTen = catalogue.products().subList(0, Math.min(10, catalogue.products().size()));
List<Product> discounted = firstTen.stream()
.map(p -> p.applyDiscount(0.1))
.collect(Collectors.toList());
// Now reconstruct the full list... tedious!
List<Product> fullList = new ArrayList<>(discounted);
fullList.addAll(catalogue.products().subList(Math.min(10, catalogue.products().size()), catalogue.products().size()));
Catalogue updated = new Catalogue(catalogue.name(), fullList);
// Even worse with nested structures
List<Order> chunk = orders.subList(startIndex, Math.min(startIndex + chunkSize, orders.size()));
// Process chunk... then what? How do we put it back?
This approach forces you to abandon the declarative power of optics, manually managing indices, bounds checking, and list reconstruction. Limiting traversals let you express this intent directly within your optic composition.
Think of Limiting Traversals Like...
- Java Stream's limit() and skip(): Like stream.limit(n) and stream.skip(n), but composable with immutable data transformations and integrated into optic pipelines
- SQL's LIMIT and OFFSET clauses: Like database pagination (LIMIT 10 OFFSET 20), but for in-memory immutable structures—enabling declarative pagination logic
- Spring Batch chunk processing: Similar to Spring Batch's chunk-oriented processing—divide a list into manageable segments for targeted transformation whilst preserving the complete dataset
- ArrayList.subList() but better: Like List.subList(from, to), but instead of a mutable view, you get an immutable optic that composes with lenses, prisms, and filtered traversals
The key insight: positional focus becomes part of your optic's identity, not an external slicing operation applied afterwards.
Five Ways to Limit Focus
Higher-kinded-j's ListTraversals utility class provides five complementary factory methods:
| Method | Description | SQL Equivalent |
|---|---|---|
taking(n) | Focus on first n elements | LIMIT n |
dropping(n) | Skip first n, focus on rest | OFFSET n (then all) |
takingLast(n) | Focus on last n elements | ORDER BY id DESC LIMIT n |
droppingLast(n) | Focus on all except last n | LIMIT (size - n) |
slicing(from, to) | Focus on range [from, to) | LIMIT (to-from) OFFSET from |
Each serves different needs, and they can be combined with other optics for powerful compositions.
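To see what these five methods mean independently of the library, note that each one amounts to choosing an index window [from, to) and transforming only the elements inside it, leaving everything else untouched. The following is a minimal, library-independent sketch of those semantics; the helper name modifyWindow is illustrative and is not part of the higher-kinded-j API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class LimitingSemantics {
    // Transform elements whose index falls in [from, to); preserve the rest.
    // taking(n)       = window [0, n)
    // dropping(n)     = window [n, size)
    // takingLast(n)   = window [size - n, size)
    // droppingLast(n) = window [0, size - n)
    // slicing(a, b)   = window [a, b)
    static <A> List<A> modifyWindow(List<A> xs, int from, int to, UnaryOperator<A> f) {
        int lo = Math.max(0, from);
        int hi = Math.min(xs.size(), to);
        List<A> out = new ArrayList<>(xs.size());
        for (int i = 0; i < xs.size(); i++) {
            out.add(i >= lo && i < hi ? f.apply(xs.get(i)) : xs.get(i));
        }
        return List.copyOf(out);
    }

    public static void main(String[] args) {
        List<String> xs = List.of("a", "b", "c", "d", "e");
        // taking(3): first three transformed, rest preserved
        System.out.println(modifyWindow(xs, 0, 3, String::toUpperCase));
        // [A, B, C, d, e]
        // takingLast(2): window [size - 2, size)
        System.out.println(modifyWindow(xs, xs.size() - 2, xs.size(), String::toUpperCase));
        // [a, b, c, D, E]
        // slicing(1, 4): half-open range, like List.subList
        System.out.println(modifyWindow(xs, 1, 4, String::toUpperCase));
        // [a, B, C, D, e]
    }
}
```

Clamping the window to the list bounds is also why the library methods tolerate counts larger than the list without throwing.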
A Step-by-Step Walkthrough
Step 1: Basic Usage — taking(int n)
The most intuitive method: focus on at most the first n elements.
import org.higherkindedj.optics.util.ListTraversals;
import org.higherkindedj.optics.util.Traversals;
// Create a traversal for first 3 products
Traversal<List<Product>, Product> first3 = ListTraversals.taking(3);
List<Product> products = List.of(
new Product("SKU001", "Widget", 10.0, 100),
new Product("SKU002", "Gadget", 25.0, 50),
new Product("SKU003", "Gizmo", 15.0, 75),
new Product("SKU004", "Doohickey", 30.0, 25),
new Product("SKU005", "Thingamajig", 20.0, 60)
);
// Apply 10% discount to ONLY first 3 products
List<Product> result = Traversals.modify(first3, p -> p.applyDiscount(0.1), products);
// First 3 discounted; last 2 preserved unchanged
// Extract ONLY first 3 products
List<Product> firstThree = Traversals.getAll(first3, products);
// Returns: [Widget, Gadget, Gizmo]
Critical Semantic: During modification, non-focused elements are preserved unchanged in the structure. During queries (like getAll), they are excluded from the results. This preserves the overall structure whilst focusing operations on the subset you care about.
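This modify-versus-query asymmetry can be sketched in plain Java; the helpers below are illustrative stand-ins for the behaviour of taking(n), not the higher-kinded-j API itself.

```java
import java.util.List;
import java.util.function.UnaryOperator;
import java.util.stream.IntStream;

public class TakingSemantics {
    // modify: the structure is preserved — every element survives,
    // but only the first n are transformed
    static <A> List<A> modifyTaking(int n, UnaryOperator<A> f, List<A> xs) {
        return IntStream.range(0, xs.size())
            .mapToObj(i -> i < n ? f.apply(xs.get(i)) : xs.get(i))
            .toList();
    }

    // query: only the focused prefix is returned
    static <A> List<A> getAllTaking(int n, List<A> xs) {
        return List.copyOf(xs.subList(0, Math.min(n, xs.size())));
    }

    public static void main(String[] args) {
        List<String> xs = List.of("a", "b", "c", "d");
        System.out.println(modifyTaking(2, String::toUpperCase, xs)); // [A, B, c, d]
        System.out.println(getAllTaking(2, xs));                      // [a, b]
    }
}
```

The list returned by modification always has the same size as the input; only queries shrink the view.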
Step 2: Skipping Elements — dropping(int n)
Focus on all elements after skipping the first n:
// Skip first 2, focus on the rest
Traversal<List<Product>, Product> afterFirst2 = ListTraversals.dropping(2);
List<Product> result = Traversals.modify(afterFirst2, p -> p.applyDiscount(0.15), products);
// First 2 unchanged; last 3 get 15% discount
List<Product> skipped = Traversals.getAll(afterFirst2, products);
// Returns: [Gizmo, Doohickey, Thingamajig]
Step 3: Focusing on the End — takingLast(int n)
Focus on the last n elements—perfect for "most recent" scenarios:
// Focus on last 2 products
Traversal<List<Product>, Product> last2 = ListTraversals.takingLast(2);
List<Product> result = Traversals.modify(last2, p -> p.applyDiscount(0.2), products);
// First 3 unchanged; last 2 get 20% discount
List<Product> lastTwo = Traversals.getAll(last2, products);
// Returns: [Doohickey, Thingamajig]
Step 4: Excluding from the End — droppingLast(int n)
Focus on all elements except the last n:
// Focus on all except last 2
Traversal<List<Product>, Product> exceptLast2 = ListTraversals.droppingLast(2);
List<Product> result = Traversals.modify(exceptLast2, p -> p.applyDiscount(0.05), products);
// First 3 get 5% discount; last 2 unchanged
List<Product> allButLastTwo = Traversals.getAll(exceptLast2, products);
// Returns: [Widget, Gadget, Gizmo]
Step 5: Precise Slicing — slicing(int from, int to)
Focus on elements within a half-open range [from, to), exactly like List.subList():
// Focus on indices 1, 2, 3 (0-indexed, exclusive end)
Traversal<List<Product>, Product> slice = ListTraversals.slicing(1, 4);
List<Product> result = Traversals.modify(slice, p -> p.applyDiscount(0.12), products);
// Index 0 unchanged; indices 1-3 discounted; index 4 unchanged
List<Product> sliced = Traversals.getAll(slice, products);
// Returns: [Gadget, Gizmo, Doohickey]
Predicate-Based Focusing: Beyond Fixed Indices
Whilst index-based limiting is powerful, many real-world scenarios require conditional focusing—stopping when a condition is met rather than at a fixed position. ListTraversals provides three predicate-based methods that complement the fixed-index approaches:
| Method | Description | Use Case |
|---|---|---|
takingWhile(Predicate) | Focus on longest prefix where predicate holds | Processing ordered data until threshold |
droppingWhile(Predicate) | Skip prefix whilst predicate holds | Ignoring header/preamble sections |
element(int) | Focus on single element at index (0-1 cardinality) | Safe indexed access without exceptions |
These methods enable runtime-determined focusing—the number of elements in focus depends on the data itself, not a predetermined count.
Step 6: Conditional Prefix with takingWhile(Predicate)
The takingWhile() method focuses on the longest prefix of elements satisfying a predicate. Once an element fails the test, traversal stops—even if later elements would pass.
// Focus on products whilst price < 20
Traversal<List<Product>, Product> affordablePrefix =
ListTraversals.takingWhile(p -> p.price() < 20.0);
List<Product> products = List.of(
new Product("SKU001", "Widget", 10.0, 100),
new Product("SKU002", "Gadget", 15.0, 50),
new Product("SKU003", "Gizmo", 25.0, 75), // Stops here
new Product("SKU004", "Thing", 12.0, 25) // Not included despite < 20
);
// Apply discount only to initial affordable items
List<Product> result = Traversals.modify(
affordablePrefix,
p -> p.applyDiscount(0.1),
products
);
// Widget and Gadget discounted; Gizmo and Thing unchanged
// Extract the affordable prefix
List<Product> affordable = Traversals.getAll(affordablePrefix, products);
// Returns: [Widget, Gadget] (stops at first expensive item)
Key Semantic: Unlike filtered(), which tests all elements, takingWhile() is sequential and prefix-oriented. It's the optics equivalent of Stream's takeWhile().
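The same sequential-versus-global distinction exists in the Stream API itself, so you can verify it directly with takeWhile and filter:

```java
import java.util.List;

public class PrefixVsFilter {
    public static void main(String[] args) {
        List<Integer> prices = List.of(10, 15, 25, 12);

        // takeWhile: stops at the first failing element (25), so the
        // later 12 is excluded even though it satisfies the predicate
        List<Integer> prefix = prices.stream()
            .takeWhile(p -> p < 20)
            .toList();
        System.out.println(prefix); // [10, 15]

        // filter: tests every element, so 12 is included
        List<Integer> all = prices.stream()
            .filter(p -> p < 20)
            .toList();
        System.out.println(all); // [10, 15, 12]
    }
}
```

takingWhile() relates to filtered() exactly as takeWhile relates to filter here.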
Real-World Use Cases:
- Time-series data: Process events before a timestamp threshold
- Sorted lists: Extract items below a value boundary
- Log processing: Capture startup messages before first error
- Priority queues: Handle high-priority items before switching logic
// Time-series: Process transactions before cutoff
LocalDateTime cutoff = LocalDateTime.of(2025, 1, 1, 0, 0);
Traversal<List<Transaction>, Transaction> beforeCutoff =
ListTraversals.takingWhile(t -> t.timestamp().isBefore(cutoff));
List<Transaction> processed = Traversals.modify(
beforeCutoff,
t -> t.withStatus("PROCESSED"),
transactions
);
Step 7: Skipping Prefix with droppingWhile(Predicate)
The droppingWhile() method is the complement to takingWhile()—it skips the prefix whilst the predicate holds, then focuses on all remaining elements.
// Skip low-stock products, focus on well-stocked ones
Traversal<List<Product>, Product> wellStocked =
ListTraversals.droppingWhile(p -> p.stock() < 50);
List<Product> products = List.of(
new Product("SKU001", "Widget", 10.0, 20),
new Product("SKU002", "Gadget", 25.0, 30),
new Product("SKU003", "Gizmo", 15.0, 75), // First to pass
new Product("SKU004", "Thing", 12.0, 25) // Included despite < 50
);
// Restock only well-stocked items (and everything after)
List<Product> restocked = Traversals.modify(
wellStocked,
p -> new Product(p.sku(), p.name(), p.price(), p.stock() + 50),
products
);
// Widget and Gadget unchanged; Gizmo and Thing restocked
List<Product> focused = Traversals.getAll(wellStocked, products);
// Returns: [Gizmo, Thing]
Real-World Use Cases:
- Skipping headers: Process CSV data after metadata rows
- Log analysis: Ignore initialisation messages, focus on runtime
- Pagination: Skip already-processed records in batch jobs
- Protocol parsing: Discard handshake, process payload
// Skip initial configuration lines in a log
Traversal<List<String>, String> runtimeLogs =
ListTraversals.droppingWhile(line -> line.startsWith("[CONFIG]"));
// Apply to log lines
List<String> logs = List.of(
"[CONFIG] Database URL", "[CONFIG] Port",
"INFO: System started", "ERROR: Connection failed");
List<String> result = Traversals.modify(runtimeLogs, String::toUpperCase, logs);
// Result: ["[CONFIG] Database URL", "[CONFIG] Port", "INFO: SYSTEM STARTED", "ERROR: CONNECTION FAILED"]
Step 8: Single Element Access with element(int)
The element() method creates an affine traversal (0-1 cardinality) focusing on a single element at the given index. Unlike direct array access, it never throws IndexOutOfBoundsException.
// Focus on element at index 2
Traversal<List<Product>, Product> thirdProduct = ListTraversals.element(2);
List<Product> products = List.of(
new Product("SKU001", "Widget", 10.0, 100),
new Product("SKU002", "Gadget", 25.0, 50),
new Product("SKU003", "Gizmo", 15.0, 75)
);
// Modify only the third product
List<Product> updated = Traversals.modify(
thirdProduct,
p -> p.applyDiscount(0.2),
products
);
// Only Gizmo discounted
// Extract the element (if present)
List<Product> element = Traversals.getAll(thirdProduct, products);
// Returns: [Gizmo]
// Out of bounds: gracefully returns empty
List<Product> outOfBounds = Traversals.getAll(
ListTraversals.element(10),
products
);
// Returns: [] (no exception)
When to Use element() vs Ixed:
- element(): For composition with other traversals, when index is known at construction time
- Ixed: For dynamic indexed access, more general type class approach
// Compose element() with nested structures
Traversal<List<List<Product>>, Product> secondListThirdProduct =
ListTraversals.element(1) // Second list
.andThen(ListTraversals.element(2)); // Third product in that list
// Ixed for dynamic access
IxedInstances.listIxed().ix(userProvidedIndex).getOptional(products);
Combining Predicate-Based and Index-Based Traversals
The real power emerges when mixing approaches:
// Take first 10 products where stock > 0, then filter by price
Traversal<List<Product>, Product> topAffordableInStock =
ListTraversals.taking(10)
.andThen(ListTraversals.takingWhile(p -> p.stock() > 0))
.filtered(p -> p.price() < 30.0);
// Skip warmup period, then take next 100 events
Traversal<List<Event>, Event> steadyState =
ListTraversals.droppingWhile(e -> e.isWarmup())
.andThen(ListTraversals.taking(100));
Edge Case Handling
All limiting traversal methods handle edge cases gracefully and consistently:
| Edge Case | Behaviour | Rationale |
|---|---|---|
| n < 0 | Treated as 0 (identity traversal) | Graceful degradation, no exceptions |
| n > list.size() | Clamped to list bounds | Focus on all available elements |
| Empty list | Returns empty list unchanged | No elements to focus on |
| from >= to in slicing | Identity traversal (no focus) | Empty range semantics |
| Negative from in slicing | Clamped to 0 | Start from beginning |
// Examples of edge case handling
List<Integer> numbers = List.of(1, 2, 3);
// n > size: focuses on all elements
List<Integer> result1 = Traversals.getAll(ListTraversals.taking(100), numbers);
// Returns: [1, 2, 3]
// Negative n: identity (no focus)
List<Integer> result2 = Traversals.getAll(ListTraversals.taking(-5), numbers);
// Returns: []
// Inverted range: no focus
List<Integer> result3 = Traversals.getAll(ListTraversals.slicing(3, 1), numbers);
// Returns: []
// Empty list: safe operation
List<Integer> result4 = Traversals.modify(ListTraversals.taking(3), x -> x * 2, List.of());
// Returns: []
This philosophy ensures no runtime exceptions from index bounds, making limiting traversals safe for dynamic data.
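The clamping rules above can be captured in a few lines of plain Java. This is an illustrative sketch of the semantics only, not the library's actual implementation:

```java
import java.util.List;

public class ClampedSlice {
    // Clamp [from, to) into the valid index range of the list — never throws
    static <A> List<A> slice(List<A> list, int from, int to) {
        int lo = Math.max(0, Math.min(from, list.size()));
        int hi = Math.max(lo, Math.min(to, list.size()));
        return list.subList(lo, hi);
    }

    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3);
        System.out.println(slice(numbers, 0, 100)); // [1, 2, 3]  (upper bound clamped)
        System.out.println(slice(numbers, -5, 2));  // [1, 2]     (negative from clamped to 0)
        System.out.println(slice(numbers, 3, 1));   // []         (inverted range → no focus)
    }
}
```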
Composing Limiting Traversals
The real power emerges when you compose limiting traversals with other optics:
With Lenses — Deep Updates
Traversal<List<Product>, Product> first5 = ListTraversals.taking(5);
Lens<Product, Double> priceLens = ProductLenses.price();
// Compose: first 5 products → their prices
Traversal<List<Product>, Double> first5Prices =
first5.andThen(priceLens.asTraversal());
// Increase prices of first 5 products by 10%
List<Product> result = Traversals.modify(first5Prices, price -> price * 1.1, products);
With Filtered Traversals — Conditional Slicing
// First 10 products that are also low stock
Traversal<List<Product>, Product> first10LowStock =
ListTraversals.taking(10).filtered(p -> p.stock() < 50);
// Restock only first 10 low-stock products
List<Product> restocked = Traversals.modify(
first10LowStock,
p -> new Product(p.sku(), p.name(), p.price(), p.stock() + 100),
products
);
With Nested Structures — Batch Processing
// Focus on first 50 orders
Traversal<List<Order>, Order> first50Orders = ListTraversals.taking(50);
// Focus on all line items in those orders
Traversal<List<Order>, LineItem> first50OrderItems =
first50Orders.andThen(OrderTraversals.items());
// Apply bulk discount to items in first 50 orders
List<Order> processed = Traversals.modify(
first50OrderItems,
item -> new LineItem(item.product().applyDiscount(0.05), item.quantity()),
orders
);
When to Use Limiting Traversals vs Other Approaches
Use Limiting Traversals When:
- Positional focus - You need to operate on elements by index position
- Structural preservation - Non-focused elements must remain in the list
- Composable pipelines - Building complex optic chains with lenses and prisms
- Immutable updates - Transforming portions whilst keeping data immutable
- Reusable logic - Define once, compose everywhere
// Perfect: Declarative, composable, reusable
Traversal<Catalogue, Double> first10Prices =
CatalogueLenses.products().asTraversal()
.andThen(ListTraversals.taking(10))
.andThen(ProductLenses.price().asTraversal());
Catalogue updated = Traversals.modify(first10Prices, p -> p * 0.9, catalogue);
Use Stream API When:
- Terminal operations - Counting, finding, collecting to new structures
- Complex transformations - Multiple chained operations with sorting/grouping
- No structural preservation needed - You're extracting data, not updating in place
- Performance-critical paths - Minimal abstraction overhead
// Better with streams: Complex aggregation
int totalStock = products.stream()
.limit(100)
.mapToInt(Product::stock)
.sum();
Use Manual Loops When:
- Early termination with side effects - Need to break out of loop
- Index-dependent logic - Processing depends on knowing the exact index
- Imperative control flow - Complex branching based on position
// Sometimes explicit indexing is clearest
for (int i = 0; i < Math.min(10, products.size()); i++) {
if (products.get(i).stock() == 0) {
notifyOutOfStock(products.get(i), i);
break;
}
}
Common Pitfalls
❌ Don't Do This:
// Inefficient: Recreating traversals in loops
for (int page = 0; page < totalPages; page++) {
var slice = ListTraversals.slicing(page * 10, (page + 1) * 10);
processPage(Traversals.getAll(slice, products));
}
// Confusing: Mixing with Stream operations unnecessarily
List<Product> result = Traversals.getAll(ListTraversals.taking(5), products)
.stream()
.limit(3) // Why limit again? Already took 5!
.collect(toList());
// Wrong expectation: Thinking it removes elements
Traversal<List<Product>, Product> first3 = ListTraversals.taking(3);
List<Product> modified = Traversals.modify(first3, p -> p.applyDiscount(0.1), products);
// modified.size() == products.size()! Structure preserved, not truncated
// Over-engineering: Using slicing for single element
Traversal<List<Product>, Product> atIndex5 = ListTraversals.slicing(5, 6);
// Consider using Ixed type class for single-element access instead
✅ Do This Instead:
// Efficient: Create traversal once, vary parameters
Traversal<List<Product>, Product> takeN(int n) {
return ListTraversals.taking(n);
}
// Or store commonly used ones as constants
static final Traversal<List<Product>, Product> FIRST_PAGE = ListTraversals.taking(10);
// Clear: Keep operations at appropriate abstraction level
List<Product> firstFive = Traversals.getAll(ListTraversals.taking(5), products);
// If you need further processing, do it separately
// Correct expectation: Use getAll for extraction, modify for transformation
List<Product> onlyFirst5 = Traversals.getAll(first5, products); // Extracts subset
List<Product> allWithFirst5Updated = Traversals.modify(first5, p -> p.applyDiscount(0.1), products); // Returns a new full-length list with the first 5 transformed
// Right tool: Use Ixed for single indexed access
Optional<Product> fifth = IxedInstances.listIxed().ix(4).getOptional(products);
Performance Notes
Limiting traversals are optimised for efficiency:
- Single pass: No intermediate list creation—slicing happens during traversal
- Structural sharing: Unchanged portions of the list are reused, not copied
- Lazy bounds checking: Index calculations are minimal and performed once
- No boxing overhead: Direct list operations without stream intermediaries
- Composable without penalty: Chaining with other optics adds no extra iteration
Best Practice: Store frequently-used limiting traversals as constants:
public class CatalogueOptics {
// Pagination constants
public static final int PAGE_SIZE = 20;
public static Traversal<List<Product>, Product> page(int pageNum) {
return ListTraversals.slicing(pageNum * PAGE_SIZE, (pageNum + 1) * PAGE_SIZE);
}
// Featured products (first 5)
public static final Traversal<Catalogue, Product> FEATURED =
CatalogueLenses.products().asTraversal()
.andThen(ListTraversals.taking(5));
// Latest additions (last 10)
public static final Traversal<Catalogue, Product> LATEST =
CatalogueLenses.products().asTraversal()
.andThen(ListTraversals.takingLast(10));
// Exclude promotional items at end
public static final Traversal<Catalogue, Product> NON_PROMOTIONAL =
CatalogueLenses.products().asTraversal()
.andThen(ListTraversals.droppingLast(3));
}
Real-World Example: E-Commerce Pagination
Here's a comprehensive example demonstrating limiting traversals in a business context:
package org.higherkindedj.example.optics;
import org.higherkindedj.optics.*;
import org.higherkindedj.optics.util.*;
import java.util.*;
public class PaginationExample {
public record Product(String sku, String name, double price, boolean featured) {
Product applyDiscount(double pct) {
return new Product(sku, name, price * (1 - pct), featured);
}
}
public static void main(String[] args) {
List<Product> catalogue = createCatalogue();
System.out.println("=== E-COMMERCE PAGINATION WITH LIMITING TRAVERSALS ===\n");
// --- Scenario 1: Basic Pagination ---
System.out.println("--- Scenario 1: Paginated Product Display ---");
int pageSize = 3;
int totalPages = (int) Math.ceil(catalogue.size() / (double) pageSize);
for (int page = 0; page < totalPages; page++) {
Traversal<List<Product>, Product> pageTraversal =
ListTraversals.slicing(page * pageSize, (page + 1) * pageSize);
List<Product> pageProducts = Traversals.getAll(pageTraversal, catalogue);
System.out.printf("Page %d: %s%n", page + 1,
pageProducts.stream().map(Product::name).toList());
}
// --- Scenario 2: Featured Products ---
System.out.println("\n--- Scenario 2: Featured Products (First 3) ---");
Traversal<List<Product>, Product> featured = ListTraversals.taking(3);
List<Product> featuredProducts = Traversals.getAll(featured, catalogue);
featuredProducts.forEach(p ->
System.out.printf(" ⭐ %s - £%.2f%n", p.name(), p.price()));
// --- Scenario 3: Apply Discount to Featured ---
System.out.println("\n--- Scenario 3: 10% Discount on Featured ---");
List<Product> withDiscount = Traversals.modify(featured, p -> p.applyDiscount(0.1), catalogue);
System.out.println("After discount on first 3:");
withDiscount.forEach(p -> System.out.printf(" %s: £%.2f%n", p.name(), p.price()));
// --- Scenario 4: Exclude Last Items ---
System.out.println("\n--- Scenario 4: All Except Last 2 (Clearance) ---");
Traversal<List<Product>, Product> nonClearance = ListTraversals.droppingLast(2);
List<Product> regularStock = Traversals.getAll(nonClearance, catalogue);
System.out.println("Regular stock: " + regularStock.stream().map(Product::name).toList());
System.out.println("\n=== PAGINATION COMPLETE ===");
}
private static List<Product> createCatalogue() {
return List.of(
new Product("SKU001", "Laptop", 999.99, true),
new Product("SKU002", "Mouse", 29.99, false),
new Product("SKU003", "Keyboard", 79.99, true),
new Product("SKU004", "Monitor", 349.99, true),
new Product("SKU005", "Webcam", 89.99, false),
new Product("SKU006", "Headset", 149.99, false),
new Product("SKU007", "USB Hub", 39.99, false),
new Product("SKU008", "Desk Lamp", 44.99, false)
);
}
}
Expected Output:
=== E-COMMERCE PAGINATION WITH LIMITING TRAVERSALS ===
--- Scenario 1: Paginated Product Display ---
Page 1: [Laptop, Mouse, Keyboard]
Page 2: [Monitor, Webcam, Headset]
Page 3: [USB Hub, Desk Lamp]
--- Scenario 2: Featured Products (First 3) ---
⭐ Laptop - £999.99
⭐ Mouse - £29.99
⭐ Keyboard - £79.99
--- Scenario 3: 10% Discount on Featured ---
After discount on first 3:
Laptop: £899.99
Mouse: £26.99
Keyboard: £71.99
Monitor: £349.99
Webcam: £89.99
Headset: £149.99
USB Hub: £39.99
Desk Lamp: £44.99
--- Scenario 4: All Except Last 2 (Clearance) ---
Regular stock: [Laptop, Mouse, Keyboard, Monitor, Webcam, Headset]
=== PAGINATION COMPLETE ===
The Relationship to Functional Programming Libraries
For those familiar with functional programming, higher-kinded-j's limiting traversals are inspired by similar patterns in:
Haskell's Lens Library
The Control.Lens.Traversal module provides:
taking :: Int -> Traversal' [a] a
dropping :: Int -> Traversal' [a] a
These create traversals that focus on the first/remaining elements—exactly what our ListTraversals.taking() and dropping() do.
Scala's Monocle Library
Monocle provides similar index-based optics:
import monocle.function.Index._
// Focus on element at index
val atIndex: Optional[List[A], A] = index(3)
// Take first n (via custom combinator)
val firstN: Traversal[List[A], A] = ...
Key Differences in Higher-Kinded-J
- Explicit Applicative instances rather than implicit type class resolution
- Java's type system requires more explicit composition steps
- Additional methods like takingLast and droppingLast, not standard in Haskell lens
- Edge case handling follows Java conventions (no exceptions, graceful clamping)
Further Reading:
- Haskell Lens Tutorial - Original inspiration for optics
- Optics By Example by Chris Penner - Comprehensive book on optics in Haskell
- Monocle Documentation - Scala optics library with similar patterns
- Java Stream API - Comparison with limit() and skip()
Summary: The Power of Limiting Traversals
Limiting traversals bring positional focus into the heart of your optic compositions:
- taking(n): Focus on first n elements
- dropping(n): Skip first n, focus on rest
- takingLast(n): Focus on last n elements
- droppingLast(n): Focus on all except last n
- slicing(from, to): Focus on index range [from, to)
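The extraction semantics of these operations — which elements end up in focus — can be stated with plain subList arithmetic. This is a reference sketch of the semantics only, not the library's implementation; slicing(from, to) applies the same clamping to both bounds:

```java
import java.util.List;

public class LimitingSemantics {
    // Clamp n into [0, size] so no index ever goes out of bounds
    static int clamp(int n, int size) {
        return Math.max(0, Math.min(n, size));
    }

    static <A> List<A> taking(List<A> xs, int n)       { return xs.subList(0, clamp(n, xs.size())); }
    static <A> List<A> dropping(List<A> xs, int n)     { return xs.subList(clamp(n, xs.size()), xs.size()); }
    static <A> List<A> takingLast(List<A> xs, int n)   { return xs.subList(xs.size() - clamp(n, xs.size()), xs.size()); }
    static <A> List<A> droppingLast(List<A> xs, int n) { return xs.subList(0, xs.size() - clamp(n, xs.size())); }

    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4, 5);
        System.out.println(taking(xs, 2));       // [1, 2]
        System.out.println(dropping(xs, 2));     // [3, 4, 5]
        System.out.println(takingLast(xs, 2));   // [4, 5]
        System.out.println(droppingLast(xs, 2)); // [1, 2, 3]
    }
}
```

The traversals do the same selection, but additionally support modify, which writes the transformed foci back into a structurally identical list.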
These tools transform how you work with list portions in immutable data structures:
| Before (Imperative) | After (Declarative) |
|---|---|
| Manual subList() with bounds checking | Single limiting traversal |
| Index manipulation breaking composition | Positional focus as part of optic chain |
| Explicit list reconstruction | Automatic structural preservation |
| Mix of "what" and "how" | Pure expression of intent |
By incorporating limiting traversals into your toolkit, you gain:
- Expressiveness: Say "first 10 products" once, compose with other optics
- Safety: No IndexOutOfBoundsException—graceful edge case handling
- Composability: Chain with lenses, prisms, filtered traversals seamlessly
- Immutability: Structure preserved, only focused elements transformed
- Clarity: Business logic separate from index arithmetic
Limiting traversals represent the natural evolution of optics for list manipulation—where Stream's limit() and skip() meet the composable, type-safe world of functional optics, all whilst maintaining full referential transparency and structural preservation.
Getters: A Practical Guide
Composable Read-Only Access
- How to extract values from structures using composable, read-only optics
- Using @GenerateGetters to create type-safe value extractors automatically
- Understanding the relationship between Getter and Fold
- Creating computed and derived values without storing them
- Composing Getters with other optics for deep data extraction
- Factory methods: of, to, constant, identity, first, second
- Null-safe navigation with getMaybe for functional optional handling
- When to use Getter vs Lens vs direct field access
- Building data transformation pipelines with clear read-only intent
In previous guides, we explored Fold for querying zero or more elements from a structure. But what if you need to extract exactly one value? What if you want a composable accessor for a single, guaranteed-to-exist value? This is where Getter excels.
A Getter is the simplest read-only optic—it extracts precisely one value from a source. Think of it as a function wrapped in optic form, enabling composition with other optics whilst maintaining read-only semantics.
The Scenario: Employee Reporting System
Consider a corporate reporting system where you need to extract various pieces of information from employee records:
The Data Model:
@GenerateGetters
public record Person(String firstName, String lastName, int age, Address address) {}
@GenerateGetters
public record Address(String street, String city, String zipCode, String country) {}
@GenerateGetters
public record Company(String name, Person ceo, List<Person> employees, Address headquarters) {}
Common Extraction Needs:
- "Get the CEO's full name"
- "Extract the company's headquarters city"
- "Calculate the CEO's age group"
- "Generate an employee's email address"
- "Compute the length of a person's full name"
A Getter makes these extractions type-safe, composable, and expressive.
Think of Getters Like...
- A functional accessor 📖: Extracting a specific value from a container
- A read-only lens 🔍: Focusing on one element without modification capability
- A computed property 🧮: Deriving values on-the-fly without storage
- A data pipeline stage 🔗: Composable extraction steps
- A pure function in optic form λ: Wrapping functions for composition
Getter vs Lens vs Fold: Understanding the Differences
| Aspect | Getter | Lens | Fold |
|---|---|---|---|
| Focus | Exactly one element | Exactly one element | Zero or more elements |
| Can modify? | ❌ No | ✅ Yes | ❌ No |
| Core operation | get(source) | get(source), set(value, source) | foldMap(monoid, fn, source) |
| Use case | Computed/derived values | Field access with updates | Queries over collections |
| Intent | "Extract this single value" | "Get or set this field" | "Query all these values" |
Key Insight: Every Lens can be viewed as a Getter (its read-only half), but not every Getter can be a Lens. A Getter extends Fold, meaning it inherits all query operations (exists, all, find, preview) whilst guaranteeing exactly one focused element.
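To make the "function wrapped in optic form" idea concrete, here is a minimal, self-contained sketch of a Getter as a single-method interface with composition. This is illustrative only — the library's real Getter also extends Fold and carries many more operations:

```java
import java.util.function.Function;

public class GetterSketch {
    // Minimal Getter: a wrapped function plus composition.
    // (Illustrative model; the real Getter also extends Fold.)
    interface Getter<S, A> {
        A get(S source);

        // Composition: focus on A within S, then B within A
        default <B> Getter<S, B> andThen(Getter<A, B> next) {
            return source -> next.get(get(source));
        }

        static <S, A> Getter<S, A> of(Function<S, A> f) {
            return f::apply;
        }
    }

    record Address(String city) {}
    record Person(String name, Address address) {}

    static String cityOf(Person p) {
        Getter<Person, Address> address = Getter.of(Person::address);
        Getter<Address, String> city = Getter.of(Address::city);
        return address.andThen(city).get(p); // Person → Address → String
    }

    public static void main(String[] args) {
        Person jane = new Person("Jane", new Address("London"));
        System.out.println(cityOf(jane)); // London
    }
}
```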
A Step-by-Step Walkthrough
Step 1: Creating Getters
Using @GenerateGetters Annotation
Annotating a record with @GenerateGetters creates a companion class (e.g., PersonGetters) containing a Getter for each field:
import org.higherkindedj.optics.annotations.GenerateGetters;
@GenerateGetters
public record Person(String firstName, String lastName, int age, Address address) {}
This generates:
- PersonGetters.firstName() → Getter<Person, String>
- PersonGetters.lastName() → Getter<Person, String>
- PersonGetters.age() → Getter<Person, Integer>
- PersonGetters.address() → Getter<Person, Address>
Plus convenience methods:
- PersonGetters.getFirstName(person) → String
- PersonGetters.getLastName(person) → String
- etc.
Using Factory Methods
Create Getters programmatically for computed or derived values:
// Simple field extraction
Getter<Person, String> firstName = Getter.of(Person::firstName);
// Computed value
Getter<Person, String> fullName = Getter.of(p -> p.firstName() + " " + p.lastName());
// Derived value
Getter<Person, String> initials = Getter.of(p ->
p.firstName().charAt(0) + "." + p.lastName().charAt(0) + ".");
// Alternative factory (alias for of)
Getter<String, Integer> stringLength = Getter.to(String::length);
Step 2: Core Getter Operations
get(source): Extract the Focused Value
The fundamental operation—returns exactly one value:
Person person = new Person("Jane", "Smith", 45, address);
Getter<Person, String> fullName = Getter.of(p -> p.firstName() + " " + p.lastName());
String name = fullName.get(person);
// Result: "Jane Smith"
Getter<Person, Integer> age = Getter.of(Person::age);
int years = age.get(person);
// Result: 45
Step 3: Composing Getters
Chain Getters together to extract deeply nested values:
Getter<Person, Address> addressGetter = Getter.of(Person::address);
Getter<Address, String> cityGetter = Getter.of(Address::city);
// Compose: Person → Address → String
Getter<Person, String> personCity = addressGetter.andThen(cityGetter);
Person person = new Person("Jane", "Smith", 45,
new Address("123 Main St", "London", "EC1A", "UK"));
String city = personCity.get(person);
// Result: "London"
Deep Composition Chain
Getter<Company, Person> ceoGetter = Getter.of(Company::ceo);
Getter<Person, String> fullNameGetter = Getter.of(p -> p.firstName() + " " + p.lastName());
Getter<String, Integer> lengthGetter = Getter.of(String::length);
// Compose: Company → Person → String → Integer
Getter<Company, Integer> ceoNameLength = ceoGetter
.andThen(fullNameGetter)
.andThen(lengthGetter);
Company company = new Company("TechCorp", ceo, employees, headquarters);
int length = ceoNameLength.get(company);
// Result: 10 (length of "Jane Smith")
Step 4: Getter as a Fold
Since Getter extends Fold, you inherit all query operations—but they operate on exactly one element:
Getter<Person, Integer> ageGetter = Getter.of(Person::age);
Person person = new Person("Jane", "Smith", 45, address);
// preview() returns Optional with the single value
Optional<Integer> age = ageGetter.preview(person);
// Result: Optional[45]
// getAll() returns a single-element list
List<Integer> ages = ageGetter.getAll(person);
// Result: [45]
// exists() checks if the single value matches
boolean isExperienced = ageGetter.exists(a -> a > 40, person);
// Result: true
// all() checks the single value (always same as exists for Getter)
boolean isSenior = ageGetter.all(a -> a >= 65, person);
// Result: false
// find() returns the value if it matches
Optional<Integer> foundAge = ageGetter.find(a -> a > 30, person);
// Result: Optional[45]
// length() always returns 1 for Getter
int count = ageGetter.length(person);
// Result: 1
// isEmpty() always returns false for Getter
boolean empty = ageGetter.isEmpty(person);
// Result: false
Step 5: Combining Getters with Folds
Compose Getters with Folds for powerful queries:
Getter<Company, List<Person>> employeesGetter = Getter.of(Company::employees);
Fold<List<Person>, Person> listFold = Fold.of(list -> list);
Getter<Person, String> fullNameGetter = Getter.of(p -> p.firstName() + " " + p.lastName());
// Company → List<Person> → Person (multiple) → String
Fold<Company, String> allEmployeeNames = employeesGetter
.asFold() // Convert Getter to Fold
.andThen(listFold)
.andThen(fullNameGetter.asFold());
List<String> names = allEmployeeNames.getAll(company);
// Result: ["John Doe", "Alice Johnson", "Bob Williams"]
boolean hasExperienced = listFold.andThen(Getter.of(Person::age).asFold())
.exists(age -> age > 40, employees);
// Result: depends on employee ages
Step 6: Maybe-Based Getter Extension
Higher-kinded-j provides the getMaybe extension method that integrates Getter with the Maybe type, enabling null-safe navigation through potentially nullable fields. This extension is available via static import from GetterExtensions.
The Challenge: Null-Safe Navigation
When working with nested data structures, intermediate values may be null, leading to NullPointerException if not handled carefully. Traditional approaches require verbose null checks at each level:
// Verbose traditional approach with null checks
Person person = company.ceo();
if (person != null) {
Address address = person.address();
if (address != null) {
String city = address.city();
if (city != null) {
System.out.println("City: " + city);
}
}
}
The getMaybe extension method provides a more functional approach by wrapping extracted values in Maybe, which explicitly models presence or absence without the risk of NPE.
Think of getMaybe Like...
- A safe elevator - Transports you to the desired floor, or tells you it's unavailable
- A null-safe wrapper - Extracts values whilst protecting against null
- Optional's functional cousin - Same safety guarantees, better functional composition
- A maybe-monad extractor - Lifts extraction into the Maybe context
How getMaybe Works
The getMaybe static method is imported from GetterExtensions:
import static org.higherkindedj.optics.extensions.GetterExtensions.getMaybe;
Signature:
public static <S, A> Maybe<A> getMaybe(Getter<S, A> getter, S source)
It extracts a value using the provided Getter and wraps it in Maybe:
- If the extracted value is non-null, returns Just(value)
- If the extracted value is null, returns Nothing
Basic Usage Example
import org.higherkindedj.optics.Getter;
import org.higherkindedj.hkt.maybe.Maybe;
import static org.higherkindedj.optics.extensions.GetterExtensions.getMaybe;
public record Person(String firstName, String lastName, Address address) {}
public record Address(String street, String city) {}
Getter<Person, String> firstNameGetter = Getter.of(Person::firstName);
Getter<Person, Address> addressGetter = Getter.of(Person::address);
Person person = new Person("Jane", "Smith", address);
// Extract non-null value
Maybe<String> name = getMaybe(firstNameGetter, person);
// Result: Just("Jane")
// Extract nullable value
Person personWithNullAddress = new Person("Bob", "Jones", null);
Maybe<Address> address = getMaybe(addressGetter, personWithNullAddress);
// Result: Nothing
Safe Navigation with Composed Getters
The real power of getMaybe emerges when navigating nested structures with potentially null intermediate values. By using flatMap, you can safely chain extractions:
Getter<Person, Address> addressGetter = Getter.of(Person::address);
Getter<Address, String> cityGetter = Getter.of(Address::city);
// Safe navigation: Person → Maybe<Address> → Maybe<String>
Person personWithAddress = new Person("Jane", "Smith",
new Address("123 Main St", "London"));
Maybe<String> city = getMaybe(addressGetter, personWithAddress)
.flatMap(addr -> getMaybe(cityGetter, addr));
// Result: Just("London")
// Safe with null intermediate
Person personWithNullAddress = new Person("Bob", "Jones", null);
Maybe<String> noCity = getMaybe(addressGetter, personWithNullAddress)
.flatMap(addr -> getMaybe(cityGetter, addr));
// Result: Nothing (safely handles null address)
Key Pattern: Use flatMap to chain getMaybe calls, creating a null-safe pipeline.
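The null-check-and-wrap model behind this pattern can be sketched in plain Java, using java.util.Optional as a stand-in for Maybe. This is an illustrative model only; the real method returns the library's Maybe type:

```java
import java.util.Optional;

public class GetMaybeSketch {
    // Stand-in for the library's Getter: just a function
    interface Getter<S, A> { A get(S source); }

    // Null-safe extraction: non-null value → present, null → empty
    static <S, A> Optional<A> getMaybe(Getter<S, A> getter, S source) {
        return Optional.ofNullable(getter.get(source));
    }

    record Address(String city) {}
    record Person(String name, Address address) {}

    static Optional<String> cityOf(Person p) {
        Getter<Person, Address> address = Person::address;
        Getter<Address, String> city = Address::city;
        // flatMap chains the extractions, short-circuiting on a null intermediate
        return getMaybe(address, p).flatMap(a -> getMaybe(city, a));
    }

    public static void main(String[] args) {
        System.out.println(cityOf(new Person("Jane", new Address("London")))); // Optional[London]
        System.out.println(cityOf(new Person("Bob", null)));                   // Optional.empty
    }
}
```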
Comparison: Direct Access vs getMaybe
Understanding when to use each approach:
| Approach | Null Safety | Composability | Verbosity | Use Case |
|---|---|---|---|---|
| Direct field access | ❌ NPE risk | ❌ No | ✅ Minimal | Known non-null values |
| Manual null checks | ✅ Safe | ❌ No | ❌ Very verbose | Simple cases |
| Optional chaining | ✅ Safe | ⚠️ Limited | ⚠️ Moderate | Java interop |
| getMaybe | ✅ Safe | ✅ Excellent | ✅ Concise | Functional pipelines |
Example Comparison:
// Direct access (risky)
String city1 = person.address().city(); // NPE if address is null!
// Manual null checks (verbose)
String city2 = null;
if (person.address() != null && person.address().city() != null) {
city2 = person.address().city();
}
// Optional chaining (better)
Optional<String> city3 = Optional.ofNullable(person.address())
.map(Address::city);
// getMaybe (best for functional code)
Maybe<String> city4 = getMaybe(addressGetter, person)
.flatMap(addr -> getMaybe(cityGetter, addr));
Integration with Maybe Operations
Once you've extracted a value into Maybe, you can leverage the full power of monadic operations:
Getter<Person, Address> addressGetter = Getter.of(Person::address);
Getter<Address, String> cityGetter = Getter.of(Address::city);
Person person = new Person("Jane", "Smith",
new Address("123 Main St", "London"));
// Extract and transform
Maybe<String> uppercaseCity = getMaybe(addressGetter, person)
.flatMap(addr -> getMaybe(cityGetter, addr))
.map(String::toUpperCase);
// Result: Just("LONDON")
// Extract with default
String cityOrDefault = getMaybe(addressGetter, person)
.flatMap(addr -> getMaybe(cityGetter, addr))
.getOrElse("Unknown");
// Result: "London"
// Extract and filter
Maybe<String> longCityName = getMaybe(addressGetter, person)
.flatMap(addr -> getMaybe(cityGetter, addr))
.filter(name -> name.length() > 5);
// Result: Just("London") (length is 6)
// Chain multiple operations
String report = getMaybe(addressGetter, person)
.flatMap(addr -> getMaybe(cityGetter, addr))
.map(city -> "Person lives in " + city)
.getOrElse("Address unknown");
// Result: "Person lives in London"
When to Use getMaybe
Use getMaybe when:
- Navigating through potentially null intermediate values
- Building functional pipelines with Maybe-based operations
- You want explicit presence/absence semantics
- Composing with other Maybe-returning functions
- Working within HKT-based abstractions
// Perfect for null-safe navigation
Maybe<String> safeCity = getMaybe(addressGetter, person)
.flatMap(addr -> getMaybe(cityGetter, addr));
Use standard get() when:
- You know the values are non-null
- You're working in performance-critical code
- You want immediate NPE on unexpected nulls (fail-fast)
// Fine when values are guaranteed non-null
String knownCity = cityGetter.get(knownAddress);
Use Getter.preview() when:
- You prefer Java's Optional for interoperability
- Working at API boundaries with standard Java code
// Good for Java interop
Optional<String> optionalCity = cityGetter.preview(address);
Real-World Scenario: Employee Profile Lookup
Here's a practical example showing how getMaybe simplifies complex null-safe extractions:
import org.higherkindedj.optics.Getter;
import org.higherkindedj.hkt.maybe.Maybe;
import static org.higherkindedj.optics.extensions.GetterExtensions.getMaybe;
public record Employee(String id, PersonalInfo personalInfo) {}
public record PersonalInfo(ContactInfo contactInfo, EmergencyContact emergencyContact) {}
public record ContactInfo(String email, String phone, Address address) {}
public record EmergencyContact(String name, String phone) {}
public record Address(String street, String city, String postcode) {}
public class EmployeeService {
private static final Getter<Employee, PersonalInfo> PERSONAL_INFO =
Getter.of(Employee::personalInfo);
private static final Getter<PersonalInfo, ContactInfo> CONTACT_INFO =
Getter.of(PersonalInfo::contactInfo);
private static final Getter<ContactInfo, Address> ADDRESS =
Getter.of(ContactInfo::address);
private static final Getter<Address, String> CITY =
Getter.of(Address::city);
// Extract employee city with full null safety
public Maybe<String> getEmployeeCity(Employee employee) {
return getMaybe(PERSONAL_INFO, employee)
.flatMap(info -> getMaybe(CONTACT_INFO, info))
.flatMap(contact -> getMaybe(ADDRESS, contact))
.flatMap(addr -> getMaybe(CITY, addr));
}
// Generate location-based welcome message
public String generateWelcomeMessage(Employee employee) {
return getEmployeeCity(employee)
.map(city -> "Welcome to our " + city + " office!")
.getOrElse("Welcome to our company!");
}
// Check if employee is in specific city
public boolean isEmployeeInCity(Employee employee, String targetCity) {
return getEmployeeCity(employee)
.filter(city -> city.equalsIgnoreCase(targetCity))
.isJust();
}
// Collect all cities from employee list (skipping unknowns)
public List<String> getAllCities(List<Employee> employees) {
return employees.stream()
.map(this::getEmployeeCity)
.filter(Maybe::isJust)
.map(Maybe::get)
.distinct()
.toList();
}
// Get city or fallback to emergency contact location
public String getAnyCityInfo(Employee employee) {
Getter<PersonalInfo, EmergencyContact> emergencyGetter =
Getter.of(PersonalInfo::emergencyContact);
// Try primary address first
Maybe<String> primaryCity = getMaybe(PERSONAL_INFO, employee)
.flatMap(info -> getMaybe(CONTACT_INFO, info))
.flatMap(contact -> getMaybe(ADDRESS, contact))
.flatMap(addr -> getMaybe(CITY, addr));
// If not found, could try emergency contact (simplified example)
return primaryCity.getOrElse("Location unknown");
}
}
Performance Considerations
getMaybe adds minimal overhead:
- One null check: Checks if the extracted value is null
- One Maybe wrapping: Creates a Just or Nothing instance
- Same extraction cost: Uses Getter.get() internally
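Conceptually, that is all getMaybe does: extract, null-check, wrap. A library-independent sketch of the idea, using java.util.Optional in place of Maybe (the helper name getNullable is illustrative, not the library API):

```java
import java.util.Optional;
import java.util.function.Function;

public class GetMaybeSketch {
    // Illustrative sketch only, not the library's implementation:
    // one extraction, one null check, one wrap.
    public static <S, A> Optional<A> getNullable(Function<S, A> getter, S source) {
        return Optional.ofNullable(getter.apply(source));
    }

    public record Address(String city) {}

    public static void main(String[] args) {
        System.out.println(getNullable(Address::city, new Address("London"))); // Optional[London]
        System.out.println(getNullable(Address::city, new Address(null)));     // Optional.empty
    }
}
```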
Optimisation Tip: For performance-critical hot paths where values are guaranteed non-null, use Getter.get() directly. For most business logic, the safety and composability of getMaybe far outweigh the negligible cost.
// Hot path with guaranteed non-null (use direct get)
String fastAccess = nameGetter.get(person);
// Business logic with potential nulls (use getMaybe)
Maybe<String> safeAccess = getMaybe(addressGetter, person)
.flatMap(addr -> getMaybe(cityGetter, addr));
Practical Pattern: Building Maybe-Safe Composed Getters
Create reusable null-safe extraction functions:
public class SafeGetters {
// Create a null-safe composed getter using Maybe
public static <A, B, C> Function<A, Maybe<C>> safePath(
Getter<A, B> first,
Getter<B, C> second
) {
return source -> getMaybe(first, source)
.flatMap(intermediate -> getMaybe(second, intermediate));
}
// Usage example
private static final Function<Person, Maybe<String>> SAFE_CITY_LOOKUP =
safePath(
Getter.of(Person::address),
Getter.of(Address::city)
);
public static void main(String[] args) {
Person person = new Person("Jane", "Smith", null);
Maybe<String> city = SAFE_CITY_LOOKUP.apply(person);
// Result: Nothing (safely handled null address)
}
}
See GetterExtensionsExample.java for a runnable demonstration of getMaybe with practical scenarios.
Built-in Helper Getters
Higher-Kinded-J provides several utility Getters:
identity(): Returns the Source Itself
Getter<String, String> id = Getter.identity();
String result = id.get("Hello");
// Result: "Hello"
Useful as a base case in composition or for type adaptation.
constant(value): Always Returns the Same Value
Getter<String, Integer> always42 = Getter.constant(42);
int result = always42.get("anything");
// Result: 42
Useful for providing default values in pipelines.
first() and second(): Pair Element Extractors
Map.Entry<Person, Address> pair = new AbstractMap.SimpleEntry<>(ceo, hqAddress);
Getter<Map.Entry<Person, Address>, Person> firstGetter = Getter.first();
Getter<Map.Entry<Person, Address>, Address> secondGetter = Getter.second();
Person person = firstGetter.get(pair);
// Result: the CEO Person
Address address = secondGetter.get(pair);
// Result: the headquarters Address
When to Use Getter vs Other Approaches
Use Getter When:
- You need computed or derived values without storing them
- You want composable extraction pipelines
- You're building reporting or analytics features
- You need type-safe accessors that compose with other optics
- You want clear read-only intent in your code
// Good: Computed value without storage overhead
Getter<Person, String> email = Getter.of(p ->
p.firstName().toLowerCase() + "." + p.lastName().toLowerCase() + "@company.com");
// Good: Composable pipeline
Getter<Company, String> ceoCityUppercase = ceoGetter
.andThen(addressGetter)
.andThen(cityGetter)
.andThen(Getter.of(String::toUpperCase));
Use Lens When:
- You need both reading and writing
- You're working with mutable state (functionally)
// Use Lens when you need to modify
Lens<Person, String> firstName = Lens.of(
Person::firstName,
(p, name) -> new Person(name, p.lastName(), p.age(), p.address()));
Person updated = firstName.set("Janet", person);
Use Fold When:
- You're querying zero or more elements
- You need to aggregate or search collections
// Use Fold for collections
Fold<Order, Product> itemsFold = Fold.of(Order::items);
List<Product> all = itemsFold.getAll(order);
Use Direct Field Access When:
- You need maximum performance with no abstraction overhead
- You're not composing with other optics
// Direct access when composition isn't needed
String name = person.firstName();
Real-World Use Cases
Data Transformation Pipelines
Getter<Person, String> fullName = Getter.of(p -> p.firstName() + " " + p.lastName());
Getter<Person, String> email = Getter.of(p ->
p.firstName().toLowerCase() + "." + p.lastName().toLowerCase() + "@techcorp.com");
Getter<Person, String> badgeId = Getter.of(p ->
p.lastName().substring(0, Math.min(3, p.lastName().length())).toUpperCase() +
String.format("%04d", p.age() * 100));
// Generate employee reports
for (Person emp : company.employees()) {
System.out.println("Employee: " + fullName.get(emp));
System.out.println(" Email: " + email.get(emp));
System.out.println(" Badge: " + badgeId.get(emp));
}
Analytics and Reporting
Fold<Company, Person> allEmployees = Fold.of(Company::employees);
Getter<Person, Integer> age = Getter.of(Person::age);
Getter<Person, Address> addressGetter = Getter.of(Person::address);
Getter<Address, String> countryGetter = Getter.of(Address::country);
// Calculate total age
int totalAge = allEmployees.andThen(age.asFold())
.foldMap(sumMonoid(), Function.identity(), company);
// Calculate average age
double averageAge = (double) totalAge / company.employees().size();
// Check conditions
boolean allFromUK = allEmployees.andThen(addressGetter.asFold())
.andThen(countryGetter.asFold())
.all(c -> c.equals("UK"), company);
API Response Mapping
// Extract specific fields from nested API responses
Getter<ApiResponse, User> userGetter = Getter.of(ApiResponse::user);
Getter<User, Profile> profileGetter = Getter.of(User::profile);
Getter<Profile, String> displayName = Getter.of(Profile::displayName);
Getter<ApiResponse, String> userName = userGetter
.andThen(profileGetter)
.andThen(displayName);
String name = userName.get(response);
Common Pitfalls
❌ Don't Use Getter When You Need to Modify
// Wrong: Getter can't modify
Getter<Person, String> nameGetter = Getter.of(Person::firstName);
// nameGetter.set("Jane", person); // Compilation error - no set method!
✅ Use Lens When Modification Is Required
// Correct: Use Lens for read-write access
Lens<Person, String> nameLens = Lens.of(Person::firstName, (p, n) ->
new Person(n, p.lastName(), p.age(), p.address()));
Person updated = nameLens.set("Jane", person);
❌ Don't Overlook Null Safety
// Risky: Getter doesn't handle null values specially
Getter<NullableRecord, String> getter = Getter.of(NullableRecord::value);
String result = getter.get(new NullableRecord(null)); // Returns null
✅ Handle Nulls Explicitly
// Safe: Handle nulls in the getter function
Getter<NullableRecord, String> safeGetter = Getter.of(r ->
r.value() != null ? r.value() : "default");
Performance Considerations
Getters are extremely lightweight:
- Zero overhead: Just a function wrapper
- No reflection: Direct method references
- Inline-friendly: JIT can optimise away the abstraction
- Lazy evaluation: Values computed only when get() is called
Best Practice: Use Getters freely—they add minimal runtime cost whilst providing excellent composability and type safety.
// Efficient: Computed on demand
Getter<Person, String> fullName = Getter.of(p -> p.firstName() + " " + p.lastName());
// No storage overhead, computed each time get() is called
String name1 = fullName.get(person1);
String name2 = fullName.get(person2);
Complete, Runnable Example
import org.higherkindedj.optics.Getter;
import org.higherkindedj.optics.Fold;
import org.higherkindedj.hkt.Monoid;
import java.util.*;
import java.util.function.Function;
public class GetterExample {
public record Person(String firstName, String lastName, int age, Address address) {}
public record Address(String street, String city, String zipCode, String country) {}
public record Company(String name, Person ceo, List<Person> employees, Address headquarters) {}
public static void main(String[] args) {
// Create sample data
Address ceoAddress = new Address("123 Executive Blvd", "London", "EC1A", "UK");
Person ceo = new Person("Jane", "Smith", 45, ceoAddress);
List<Person> employees = List.of(
new Person("John", "Doe", 30, new Address("456 Oak St", "Manchester", "M1", "UK")),
new Person("Alice", "Johnson", 28, new Address("789 Elm Ave", "Birmingham", "B1", "UK")),
new Person("Bob", "Williams", 35, new Address("321 Pine Rd", "Leeds", "LS1", "UK"))
);
Address hqAddress = new Address("1000 Corporate Way", "London", "EC2A", "UK");
Company company = new Company("TechCorp", ceo, employees, hqAddress);
// === Basic Getters ===
Getter<Person, String> fullName = Getter.of(p -> p.firstName() + " " + p.lastName());
Getter<Person, Integer> age = Getter.of(Person::age);
System.out.println("CEO: " + fullName.get(ceo));
System.out.println("CEO Age: " + age.get(ceo));
// === Computed Values ===
Getter<Person, String> initials = Getter.of(p ->
p.firstName().charAt(0) + "." + p.lastName().charAt(0) + ".");
Getter<Person, String> email = Getter.of(p ->
p.firstName().toLowerCase() + "." + p.lastName().toLowerCase() + "@techcorp.com");
System.out.println("CEO Initials: " + initials.get(ceo));
System.out.println("CEO Email: " + email.get(ceo));
// === Composition ===
Getter<Person, Address> addressGetter = Getter.of(Person::address);
Getter<Address, String> cityGetter = Getter.of(Address::city);
Getter<Company, Person> ceoGetter = Getter.of(Company::ceo);
Getter<Person, String> personCity = addressGetter.andThen(cityGetter);
Getter<Company, String> companyCeoCity = ceoGetter.andThen(personCity);
System.out.println("CEO City: " + personCity.get(ceo));
System.out.println("Company CEO City: " + companyCeoCity.get(company));
// === Getter as Fold ===
Optional<Integer> ceoAge = age.preview(ceo);
boolean isExperienced = age.exists(a -> a > 40, ceo);
int ageCount = age.length(ceo); // Always 1 for Getter
System.out.println("CEO Age (Optional): " + ceoAge);
System.out.println("CEO is Experienced: " + isExperienced);
System.out.println("Age Count: " + ageCount);
// === Employee Analysis ===
Fold<List<Person>, Person> listFold = Fold.of(list -> list);
List<String> employeeNames = listFold.andThen(fullName.asFold()).getAll(employees);
System.out.println("Employee Names: " + employeeNames);
List<String> employeeEmails = listFold.andThen(email.asFold()).getAll(employees);
System.out.println("Employee Emails: " + employeeEmails);
// Calculate average age
int totalAge = listFold.andThen(age.asFold())
.foldMap(sumMonoid(), Function.identity(), employees);
double avgAge = (double) totalAge / employees.size();
System.out.println("Average Employee Age: " + String.format("%.1f", avgAge));
// Check if all from UK
Getter<Address, String> countryGetter = Getter.of(Address::country);
boolean allUK = listFold.andThen(addressGetter.asFold())
.andThen(countryGetter.asFold())
.all(c -> c.equals("UK"), employees);
System.out.println("All Employees from UK: " + allUK);
}
private static Monoid<Integer> sumMonoid() {
return new Monoid<>() {
@Override public Integer empty() { return 0; }
@Override public Integer combine(Integer a, Integer b) { return a + b; }
};
}
}
Expected Output:
CEO: Jane Smith
CEO Age: 45
CEO Initials: J.S.
CEO Email: jane.smith@techcorp.com
CEO City: London
Company CEO City: London
CEO Age (Optional): Optional[45]
CEO is Experienced: true
Age Count: 1
Employee Names: [John Doe, Alice Johnson, Bob Williams]
Employee Emails: [john.doe@techcorp.com, alice.johnson@techcorp.com, bob.williams@techcorp.com]
Average Employee Age: 31.0
All Employees from UK: true
Why Getters Are Important
Getter completes the read-only optics family by providing:
- Single-element focus: Guarantees exactly one value (unlike Fold's zero-or-more)
- Composability: Chains beautifully with other optics
- Computed values: Derive data without storage overhead
- Clear intent: Explicitly read-only, preventing accidental modifications
- Type safety: Compile-time guarantees on extraction paths
- Fold inheritance: Leverages query operations (exists, all, find) for single values
By adding Getter to your optics toolkit alongside Lens, Prism, Iso, Traversal, and Fold, you have precise control over read-only access patterns. Use Getter when you need composable value extraction, Fold when you query collections, and Lens when you need both reading and writing.
The key insight: Getters make pure functions first-class composable citizens, allowing you to build sophisticated data extraction pipelines with clarity and type safety.
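That insight fits in a few lines. The following self-contained sketch (MiniGetter is an illustrative name, not the Higher-Kinded-J API) shows that a getter is nothing more than a pure function with composition attached:

```java
import java.util.function.Function;

public class MiniGetterSketch {
    // Illustrative only -- not the Higher-Kinded-J API.
    // A getter is a pure function S -> A; andThen is plain function composition.
    public interface MiniGetter<S, A> {
        A get(S source);

        default <B> MiniGetter<S, B> andThen(MiniGetter<A, B> next) {
            return s -> next.get(get(s));
        }

        static <S, A> MiniGetter<S, A> of(Function<S, A> f) {
            return f::apply;
        }
    }

    public record Address(String city) {}
    public record Person(String name, Address address) {}

    public static void main(String[] args) {
        MiniGetter<Person, String> city =
            MiniGetter.of(Person::address).andThen(MiniGetter.of(Address::city));
        System.out.println(city.get(new Person("Jane", new Address("London")))); // London
    }
}
```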
Setters: A Practical Guide
Composable Write-Only Modifications
- How to modify data structures using composable, write-only optics
- Using @GenerateSetters to create type-safe modifiers automatically
- Understanding the relationship between Setter and Traversal
- Creating modification pipelines without read access
- Effectful modifications using Applicative contexts
- Factory methods: of, fromGetSet, forList, forMapValues, identity
- When to use Setter vs Lens vs Traversal
- Building batch update and normalisation pipelines
In the previous guide, we explored Getter for composable read-only access. Now we turn to its dual: Setter, a write-only optic that modifies data without necessarily reading it first.
A Setter is an optic that focuses on transforming elements within a structure. Unlike a Lens, which provides both getting and setting, a Setter concentrates solely on modification—making it ideal for batch updates, data normalisation, and transformation pipelines where read access isn't required.
The Scenario: User Management System
Consider a user management system where you need to perform various modifications:
The Data Model:
@GenerateSetters
public record User(String username, String email, int loginCount, UserSettings settings) {}
@GenerateSetters
public record UserSettings(
String theme, boolean notifications, int fontSize, Map<String, String> preferences) {}
@GenerateSetters
public record Product(String name, double price, int stock, List<String> tags) {}
@GenerateSetters
public record Inventory(List<Product> products, String warehouseId) {}
Common Modification Needs:
- "Normalise all usernames to lowercase"
- "Increment login count after authentication"
- "Apply 10% discount to all products"
- "Restock all items by 10 units"
- "Convert all product names to title case"
- "Set all user themes to dark mode"
A Setter makes these modifications type-safe, composable, and expressive.
Think of Setters Like...
- A functional modifier ✏️: Transforming values without reading
- A write-only lens 🎯: Focusing on modification only
- A batch transformer 🔄: Applying changes to multiple elements
- A data normalisation tool 📐: Standardising formats across structures
- A pipeline stage ⚙️: Composable modification steps
Setter vs Lens vs Traversal: Understanding the Differences
| Aspect | Setter | Lens | Traversal |
|---|---|---|---|
| Focus | One or more elements | Exactly one element | Zero or more elements |
| Can read? | ❌ No (typically) | ✅ Yes | ✅ Yes |
| Can modify? | ✅ Yes | ✅ Yes | ✅ Yes |
| Core operations | modify, set | get, set, modify | modifyF, getAll |
| Use case | Write-only pipelines | Read-write field access | Collection traversals |
| Intent | "Transform these values" | "Get or set this field" | "Update all these elements" |
Key Insight: A Setter can be viewed as the write-only half of a Lens. It extends Optic, enabling composition with other optics and supporting effectful modifications via modifyF. Choose Setter when you want to emphasise write-only intent or when read access isn't needed.
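The "write-only half of a Lens" idea can be sketched in plain Java. Names here (MiniSetter and friends) are illustrative, not the library's API, and the real Setter additionally supports effectful modifyF; but the core contract is just "apply a function to the focus and rebuild the whole":

```java
import java.util.function.UnaryOperator;

public class MiniSetterSketch {
    // Illustrative only -- not the Higher-Kinded-J API.
    // modify transforms the focus and rebuilds the source; set is modify with
    // a constant function; andThen nests one modification inside another.
    public interface MiniSetter<S, A> {
        S modify(UnaryOperator<A> f, S source);

        default S set(A value, S source) {
            return modify(a -> value, source);
        }

        default <B> MiniSetter<S, B> andThen(MiniSetter<A, B> next) {
            return (f, s) -> modify(a -> next.modify(f, a), s);
        }
    }

    public record Settings(String theme) {}
    public record User(String name, Settings settings) {}

    public static void main(String[] args) {
        MiniSetter<User, Settings> settings =
            (f, u) -> new User(u.name(), f.apply(u.settings()));
        MiniSetter<Settings, String> theme =
            (f, s) -> new Settings(f.apply(s.theme()));

        // Deep write without ever reading the current theme
        User dark = settings.andThen(theme).set("dark", new User("jo", new Settings("light")));
        System.out.println(dark.settings().theme()); // dark
    }
}
```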
A Step-by-Step Walkthrough
Step 1: Creating Setters
Using @GenerateSetters Annotation
Annotating a record with @GenerateSetters creates a companion class (e.g., UserSetters) containing a Setter for each field:
import org.higherkindedj.optics.annotations.GenerateSetters;
@GenerateSetters
public record User(String username, String email, int loginCount, UserSettings settings) {}
This generates:
- UserSetters.username() → Setter<User, String>
- UserSetters.email() → Setter<User, String>
- UserSetters.loginCount() → Setter<User, Integer>
- UserSetters.settings() → Setter<User, UserSettings>
Plus convenience methods:
- UserSetters.withUsername(user, newUsername) → User
- UserSetters.withEmail(user, newEmail) → User
- etc.
Using Factory Methods
Create Setters programmatically:
// Using fromGetSet for single-element focus
Setter<User, String> usernameSetter = Setter.fromGetSet(
User::username,
(user, newUsername) -> new User(newUsername, user.email(), user.loginCount(), user.settings()));
// Using of for transformation-based definition
Setter<Person, String> nameSetter = Setter.of(
f -> person -> new Person(f.apply(person.name()), person.age()));
// Built-in collection setters
Setter<List<Integer>, Integer> listSetter = Setter.forList();
Setter<Map<String, Double>, Double> mapValuesSetter = Setter.forMapValues();
Step 2: Core Setter Operations
modify(function, source): Transform the Focused Value
Applies a function to modify the focused element:
Setter<User, String> usernameSetter = Setter.fromGetSet(
User::username,
(u, name) -> new User(name, u.email(), u.loginCount(), u.settings()));
User user = new User("JOHN_DOE", "john@example.com", 10, settings);
// Transform username to lowercase
User normalised = usernameSetter.modify(String::toLowerCase, user);
// Result: User("john_doe", "john@example.com", 10, settings)
// Append suffix
User suffixed = usernameSetter.modify(name -> name + "_admin", user);
// Result: User("JOHN_DOE_admin", "john@example.com", 10, settings)
set(value, source): Replace the Focused Value
Sets all focused elements to a specific value:
Setter<User, Integer> loginCountSetter = Setter.fromGetSet(
User::loginCount,
(u, count) -> new User(u.username(), u.email(), count, u.settings()));
User user = new User("john", "john@example.com", 10, settings);
User reset = loginCountSetter.set(0, user);
// Result: User("john", "john@example.com", 0, settings)
Step 3: Composing Setters
Chain Setters together for deep modifications:
Setter<User, UserSettings> settingsSetter = Setter.fromGetSet(
User::settings,
(u, s) -> new User(u.username(), u.email(), u.loginCount(), s));
Setter<UserSettings, String> themeSetter = Setter.fromGetSet(
UserSettings::theme,
(s, theme) -> new UserSettings(theme, s.notifications(), s.fontSize(), s.preferences()));
// Compose: User → UserSettings → String
Setter<User, String> userThemeSetter = settingsSetter.andThen(themeSetter);
User user = new User("john", "john@example.com", 10,
new UserSettings("light", true, 14, Map.of()));
User darkModeUser = userThemeSetter.set("dark", user);
// Result: User with settings.theme = "dark"
Deep Composition Chain
Setter<User, UserSettings> settingsSetter = /* ... */;
Setter<UserSettings, Integer> fontSizeSetter = /* ... */;
Setter<User, Integer> userFontSizeSetter = settingsSetter.andThen(fontSizeSetter);
User largerFont = userFontSizeSetter.modify(size -> size + 2, user);
// Result: User with settings.fontSize increased by 2
Step 4: Collection Setters
Higher-Kinded-J provides built-in Setters for collections:
forList(): Modify All List Elements
Setter<List<Integer>, Integer> listSetter = Setter.forList();
List<Integer> numbers = List.of(1, 2, 3, 4, 5);
// Double all values
List<Integer> doubled = listSetter.modify(x -> x * 2, numbers);
// Result: [2, 4, 6, 8, 10]
// Set all to same value
List<Integer> allZeros = listSetter.set(0, numbers);
// Result: [0, 0, 0, 0, 0]
forMapValues(): Modify All Map Values
Setter<Map<String, Integer>, Integer> mapSetter = Setter.forMapValues();
Map<String, Integer> scores = Map.of("Alice", 85, "Bob", 90, "Charlie", 78);
// Add 5 points to all scores
Map<String, Integer> curved = mapSetter.modify(score -> Math.min(100, score + 5), scores);
// Result: {Alice=90, Bob=95, Charlie=83}
// Reset all scores
Map<String, Integer> reset = mapSetter.set(0, scores);
// Result: {Alice=0, Bob=0, Charlie=0}
Step 5: Nested Collection Setters
Compose Setters for complex nested modifications:
Setter<Inventory, List<Product>> productsSetter = Setter.fromGetSet(
Inventory::products,
(inv, prods) -> new Inventory(prods, inv.warehouseId()));
Setter<List<Product>, Product> productListSetter = Setter.forList();
Setter<Product, Double> priceSetter = Setter.fromGetSet(
Product::price,
(p, price) -> new Product(p.name(), price, p.stock(), p.tags()));
// Compose: Inventory → List<Product> → Product
Setter<Inventory, Product> allProductsSetter = productsSetter.andThen(productListSetter);
Inventory inventory = new Inventory(
List.of(
new Product("Laptop", 999.99, 50, List.of("electronics")),
new Product("Keyboard", 79.99, 100, List.of("accessories")),
new Product("Monitor", 299.99, 30, List.of("displays"))),
"WH-001");
// Apply 10% discount to all products
Inventory discounted = allProductsSetter.modify(
product -> priceSetter.modify(price -> price * 0.9, product),
inventory);
// Result: All product prices reduced by 10%
// Restock all products
Setter<Product, Integer> stockSetter = Setter.fromGetSet(
Product::stock,
(p, stock) -> new Product(p.name(), p.price(), stock, p.tags()));
Inventory restocked = allProductsSetter.modify(
product -> stockSetter.modify(stock -> stock + 10, product),
inventory);
// Result: All product stock increased by 10
Step 6: Effectful Modifications
Setters support effectful modifications via modifyF, allowing you to compose modifications that might fail or have side effects:
Setter<User, String> usernameSetter = Setter.fromGetSet(
User::username,
(u, name) -> new User(name, u.email(), u.loginCount(), u.settings()));
// Validation: username must be at least 3 characters and lowercase
Function<String, Kind<OptionalKind.Witness, String>> validateUsername = username -> {
if (username.length() >= 3 && username.matches("[a-z_]+")) {
return OptionalKindHelper.OPTIONAL.widen(Optional.of(username));
} else {
return OptionalKindHelper.OPTIONAL.widen(Optional.empty());
}
};
User validUser = new User("john_doe", "john@example.com", 10, settings);
Kind<OptionalKind.Witness, User> result =
usernameSetter.modifyF(validateUsername, validUser, OptionalMonad.INSTANCE);
Optional<User> validated = OptionalKindHelper.OPTIONAL.narrow(result);
// Result: Optional[User with validated username]
User invalidUser = new User("ab", "a@test.com", 0, settings); // Too short
Kind<OptionalKind.Witness, User> invalidResult =
usernameSetter.modifyF(validateUsername, invalidUser, OptionalMonad.INSTANCE);
Optional<User> invalidValidated = OptionalKindHelper.OPTIONAL.narrow(invalidResult);
// Result: Optional.empty (validation failed)
Sequencing Effects in Collections
Setter<List<Integer>, Integer> listSetter = Setter.forList();
List<Integer> numbers = List.of(1, 2, 3);
Function<Integer, Kind<OptionalKind.Witness, Integer>> doubleIfPositive = n -> {
if (n > 0) {
return OptionalKindHelper.OPTIONAL.widen(Optional.of(n * 2));
} else {
return OptionalKindHelper.OPTIONAL.widen(Optional.empty());
}
};
Kind<OptionalKind.Witness, List<Integer>> result =
listSetter.modifyF(doubleIfPositive, numbers, OptionalMonad.INSTANCE);
Optional<List<Integer>> doubled = OptionalKindHelper.OPTIONAL.narrow(result);
// Result: Optional[[2, 4, 6]]
// With negative number (will fail)
List<Integer> withNegative = List.of(1, -2, 3);
Kind<OptionalKind.Witness, List<Integer>> failedResult =
listSetter.modifyF(doubleIfPositive, withNegative, OptionalMonad.INSTANCE);
Optional<List<Integer>> failed = OptionalKindHelper.OPTIONAL.narrow(failedResult);
// Result: Optional.empty (validation failed on -2)
Step 7: Converting to Traversal
Setters can be viewed as Traversals, enabling integration with other optics:
Setter<User, String> nameSetter = Setter.fromGetSet(
User::username,
(u, name) -> new User(name, u.email(), u.loginCount(), u.settings()));
Traversal<User, String> nameTraversal = nameSetter.asTraversal();
// Now you can use Traversal operations
Function<String, Kind<OptionalKind.Witness, String>> toUpper =
s -> OptionalKindHelper.OPTIONAL.widen(Optional.of(s.toUpperCase()));
Kind<OptionalKind.Witness, User> result =
nameTraversal.modifyF(toUpper, user, OptionalMonad.INSTANCE);
Built-in Helper Setters
identity(): Modifies the Source Itself
Setter<String, String> identitySetter = Setter.identity();
String result = identitySetter.modify(String::toUpperCase, "hello");
// Result: "HELLO"
String replaced = identitySetter.set("world", "hello");
// Result: "world"
Useful as a base case in composition or for direct value transformation.
When to Use Setter vs Other Approaches
Use Setter When:
- You need write-only access without reading
- You're building batch transformation pipelines
- You want clear modification intent in your code
- You need effectful modifications with validation
- You're performing data normalisation across structures
// Good: Batch normalisation
Setter<List<String>, String> listSetter = Setter.forList();
List<String> normalised = listSetter.modify(String::trim, rawStrings);
// Good: Composable deep modification
Setter<Company, String> employeeNamesSetter = companySetter
.andThen(employeesSetter)
.andThen(personNameSetter);
Use Lens When:
- You need both reading and writing
- You want to get and set the same field
// Use Lens when you need to read
Lens<User, String> usernameLens = Lens.of(
User::username,
(u, name) -> new User(name, u.email(), u.loginCount(), u.settings()));
String current = usernameLens.get(user); // Read
User updated = usernameLens.set("new_name", user); // Write
Use Traversal When:
- You need read operations (getAll) on collections
- You're working with optional or multiple focuses
// Use Traversal when you need to extract values too
Traversal<Order, Product> productTraversal = /* ... */;
List<Product> all = Traversals.getAll(productTraversal, order); // Read
Use Direct Mutation When:
- You're working with mutable objects (not recommended in FP)
- Performance is absolutely critical
// Direct mutation (only for mutable objects)
user.setUsername("new_name"); // Avoid in functional programming
Real-World Use Cases
Data Normalisation Pipeline
Setter<List<Product>, Product> productSetter = Setter.forList();
Setter<Product, String> nameSetter = Setter.fromGetSet(
Product::name,
(p, name) -> new Product(name, p.price(), p.stock(), p.tags()));
Function<String, String> normalise = name -> {
String trimmed = name.trim();
return trimmed.substring(0, 1).toUpperCase() +
trimmed.substring(1).toLowerCase();
};
List<Product> rawProducts = List.of(
new Product(" LAPTOP ", 999.99, 50, List.of()),
new Product("keyboard", 79.99, 100, List.of()),
new Product("MONITOR", 299.99, 30, List.of()));
List<Product> normalised = productSetter.modify(
product -> nameSetter.modify(normalise, product),
rawProducts);
// Result: [Product("Laptop", ...), Product("Keyboard", ...), Product("Monitor", ...)]
Currency Conversion
Setter<Product, Double> priceSetter = /* ... */;
double exchangeRate = 0.92; // USD to EUR
List<Product> euroProducts = productSetter.modify(
product -> priceSetter.modify(price -> price * exchangeRate, product),
usdProducts);
Batch User Updates
Setter<List<User>, User> usersSetter = Setter.forList();
Setter<User, Integer> loginCountSetter = /* ... */;
// Reset all login counts
List<User> resetUsers = usersSetter.modify(
user -> loginCountSetter.set(0, user),
users);
// Increment all login counts
List<User> incremented = usersSetter.modify(
user -> loginCountSetter.modify(count -> count + 1, user),
users);
Theme Migration
Setter<User, String> userThemeSetter = settingsSetter.andThen(themeSetter);
// Migrate all users to dark mode
List<User> darkModeUsers = usersSetter.modify(
user -> userThemeSetter.set("dark", user),
users);
Common Pitfalls
❌ Don't Use Setter.of() for Effectful Operations
// Warning: Setter.of() doesn't support modifyF properly
Setter<Person, String> nameSetter = Setter.of(
f -> person -> new Person(f.apply(person.name()), person.age()));
// This will throw UnsupportedOperationException!
nameSetter.modifyF(validateFn, person, applicative);
✅ Use fromGetSet() for Effectful Support
// Correct: fromGetSet supports modifyF
Setter<Person, String> nameSetter = Setter.fromGetSet(
Person::name,
(p, name) -> new Person(name, p.age()));
// Works correctly
nameSetter.modifyF(validateFn, person, applicative);
❌ Don't Forget Immutability
// Wrong: Modifying in place (if mutable)
setter.modify(obj -> { obj.setValue(newValue); return obj; }, source);
✅ Always Return New Instances
// Correct: Return new immutable instance
Setter<Product, Double> priceSetter = Setter.fromGetSet(
Product::price,
(p, price) -> new Product(p.name(), price, p.stock(), p.tags()));
Performance Considerations
Setters are lightweight and efficient:
- Minimal overhead: Just function composition
- No reflection: Direct method calls
- Lazy application: Modifications only applied when executed
- JIT-friendly: Can be inlined by the JVM
- O(n) collection operations: forList() and forMapValues() are optimised to avoid quadratic time complexity
Optimised Collection Operations
The modifyF implementations in forList() and forMapValues() use efficient algorithms:
- Right-to-left folding: Uses LinkedList with O(1) prepending instead of repeated array copying
- Single-pass construction: Collects effects first, sequences them, then builds the final collection once
- Linear time complexity: O(n) for lists and maps with n elements
This means you can safely use effectful modifications on large collections without performance concerns:
// Efficient even for large lists
Setter<List<Integer>, Integer> listSetter = Setter.forList();
List<Integer> largeList = /* thousands of elements */;
// O(n) time complexity, not O(n²)
Kind<OptionalKind.Witness, List<Integer>> result =
listSetter.modifyF(validateAndTransform, largeList, OptionalMonad.INSTANCE);
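The collect-and-rebuild scheme itself is easy to see in a library-independent sketch, with java.util.Optional standing in for the Applicative effect. This is illustrative code, not the library internals (it uses a simple left-to-right loop rather than the right-to-left fold described above), but it has the same fail-fast behaviour and O(n) cost:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

public class TraverseSketch {
    // Illustrative only: apply the effectful function to each element,
    // short-circuit on the first failure, and build the result list in one pass.
    public static <A, B> Optional<List<B>> traverse(List<A> xs, Function<A, Optional<B>> f) {
        List<B> out = new ArrayList<>(xs.size());
        for (A a : xs) {
            Optional<B> b = f.apply(a);
            if (b.isEmpty()) return Optional.empty(); // fail fast, no partial result
            out.add(b.get());
        }
        return Optional.of(out);
    }

    public static void main(String[] args) {
        Function<Integer, Optional<Integer>> doubleIfPositive =
            n -> n > 0 ? Optional.of(n * 2) : Optional.empty();

        System.out.println(traverse(List.of(1, 2, 3), doubleIfPositive));  // Optional[[2, 4, 6]]
        System.out.println(traverse(List.of(1, -2, 3), doubleIfPositive)); // Optional.empty
    }
}
```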
Best Practice: Compose Setters at initialisation time, then reuse:
// Define once
private static final Setter<Company, Double> ALL_PRODUCT_PRICES =
companySetter.andThen(productsSetter).andThen(priceSetter);
// Reuse many times
Company discounted = ALL_PRODUCT_PRICES.modify(p -> p * 0.9, company);
Company inflated = ALL_PRODUCT_PRICES.modify(p -> p * 1.05, company);
Complete, Runnable Example
import org.higherkindedj.optics.Setter;
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.optional.OptionalKind;
import org.higherkindedj.hkt.optional.OptionalKindHelper;
import org.higherkindedj.hkt.optional.OptionalMonad;
import java.util.*;
import java.util.function.Function;
public class SetterExample {
public record User(String username, String email, int loginCount, UserSettings settings) {}
public record UserSettings(String theme, boolean notifications, int fontSize) {}
public record Product(String name, double price, int stock) {}
public static void main(String[] args) {
// === Basic Setters ===
Setter<User, String> usernameSetter = Setter.fromGetSet(
User::username,
(u, name) -> new User(name, u.email(), u.loginCount(), u.settings()));
Setter<User, Integer> loginCountSetter = Setter.fromGetSet(
User::loginCount,
(u, count) -> new User(u.username(), u.email(), count, u.settings()));
UserSettings settings = new UserSettings("light", true, 14);
User user = new User("JOHN_DOE", "john@example.com", 10, settings);
// Normalise username
User normalised = usernameSetter.modify(String::toLowerCase, user);
System.out.println("Normalised: " + normalised.username());
// Increment login count
User incremented = loginCountSetter.modify(count -> count + 1, user);
System.out.println("Login count: " + incremented.loginCount());
// === Composition ===
Setter<User, UserSettings> settingsSetter = Setter.fromGetSet(
User::settings,
(u, s) -> new User(u.username(), u.email(), u.loginCount(), s));
Setter<UserSettings, String> themeSetter = Setter.fromGetSet(
UserSettings::theme,
(s, theme) -> new UserSettings(theme, s.notifications(), s.fontSize()));
Setter<User, String> userThemeSetter = settingsSetter.andThen(themeSetter);
User darkMode = userThemeSetter.set("dark", user);
System.out.println("Theme: " + darkMode.settings().theme());
// === Collection Setters ===
Setter<List<Integer>, Integer> listSetter = Setter.forList();
List<Integer> numbers = List.of(1, 2, 3, 4, 5);
List<Integer> doubled = listSetter.modify(x -> x * 2, numbers);
System.out.println("Doubled: " + doubled);
// === Product Batch Update ===
Setter<Product, Double> priceSetter = Setter.fromGetSet(
Product::price,
(p, price) -> new Product(p.name(), price, p.stock()));
Setter<List<Product>, Product> productsSetter = Setter.forList();
List<Product> products = List.of(
new Product("Laptop", 999.99, 50),
new Product("Keyboard", 79.99, 100),
new Product("Monitor", 299.99, 30));
// Apply 10% discount
List<Product> discounted = productsSetter.modify(
product -> priceSetter.modify(price -> price * 0.9, product),
products);
System.out.println("Discounted prices:");
for (Product p : discounted) {
System.out.printf(" %s: £%.2f%n", p.name(), p.price());
}
// === Effectful Modification ===
Function<String, Kind<OptionalKind.Witness, String>> validateUsername =
username -> {
if (username.length() >= 3 && username.matches("[a-z_]+")) {
return OptionalKindHelper.OPTIONAL.widen(Optional.of(username));
} else {
return OptionalKindHelper.OPTIONAL.widen(Optional.empty());
}
};
User validUser = new User("john_doe", "john@example.com", 10, settings);
Kind<OptionalKind.Witness, User> validResult =
usernameSetter.modifyF(validateUsername, validUser, OptionalMonad.INSTANCE);
Optional<User> validated = OptionalKindHelper.OPTIONAL.narrow(validResult);
System.out.println("Valid username: " + validated.map(User::username).orElse("INVALID"));
User invalidUser = new User("ab", "a@test.com", 0, settings);
Kind<OptionalKind.Witness, User> invalidResult =
usernameSetter.modifyF(validateUsername, invalidUser, OptionalMonad.INSTANCE);
Optional<User> invalidValidated = OptionalKindHelper.OPTIONAL.narrow(invalidResult);
System.out.println("Invalid username: " + invalidValidated.map(User::username).orElse("INVALID"));
// === Data Normalisation ===
Setter<Product, String> nameSetter = Setter.fromGetSet(
Product::name,
(p, name) -> new Product(name, p.price(), p.stock()));
Function<String, String> titleCase = name -> {
String trimmed = name.trim();
return trimmed.substring(0, 1).toUpperCase() + trimmed.substring(1).toLowerCase();
};
List<Product> rawProducts = List.of(
new Product(" LAPTOP ", 999.99, 50),
new Product("keyboard", 79.99, 100),
new Product("MONITOR", 299.99, 30));
List<Product> normalisedProducts = productsSetter.modify(
product -> nameSetter.modify(titleCase, product),
rawProducts);
System.out.println("Normalised product names:");
for (Product p : normalisedProducts) {
System.out.println(" - " + p.name());
}
}
}
Expected Output:
Normalised: john_doe
Login count: 11
Theme: dark
Doubled: [2, 4, 6, 8, 10]
Discounted prices:
Laptop: £899.99
Keyboard: £71.99
Monitor: £269.99
Valid username: john_doe
Invalid username: INVALID
Normalised product names:
- Laptop
- Keyboard
- Monitor
Why Setters Are Important
Setter provides a focused, write-only approach to data modification:
- Clear intent: Explicitly write-only, preventing accidental reads
- Composability: Chains beautifully for deep, nested modifications
- Batch operations: Natural fit for updating collections
- Effectful support: Integrates with validation and error handling via Applicatives
- Type safety: Compile-time guarantees on modification paths
- Immutability-friendly: Designed for functional, immutable data structures
By adding Setter to your optics toolkit alongside Getter, Lens, Prism, Iso, Traversal, and Fold, you gain fine-grained control over both reading and writing patterns. Use Setter when you need composable write-only access, Getter for read-only extraction, and Lens when you need both.
The key insight: Setters make modifications first-class composable operations, allowing you to build sophisticated data transformation pipelines with clarity, type safety, and clear functional intent.
Previous: Getters: Composable Read-Only Access Next: Profunctor Optics: Advanced Data Transformation
At Type Class: A Practical Guide
Indexed CRUD Operations on Collections
- How to insert, update, and delete entries in indexed structures using optics
- Understanding the Lens<S, Optional<A>> pattern for CRUD operations
- Factory methods: mapAt(), listAt(), listAtWithPadding()
- Composing At with Lenses for deep access into nested collections
- Using Prisms.some() to unwrap Optional for chained modifications
- When to use At vs Traversals.forMap() vs direct Map operations
- Handling Java's Optional limitations with null values
- Building immutable configuration management systems
In previous guides, we explored Lens for accessing product fields and Traversal for operating over collections. But what happens when you need to insert a new entry into a map, delete an existing key, or update a specific list index? Standard lenses can't express these operations because they focus on values that already exist.
This is where At fills a crucial gap. It provides a Lens that focuses on the optional presence of a value at a given index—enabling full CRUD (Create, Read, Update, Delete) operations whilst maintaining immutability and composability.
The Scenario: Application Configuration Management
Consider a configuration management system where settings are stored in nested maps and lists:
The Data Model:
public record AppConfig(
String appName,
Map<String, String> settings,
Map<String, Map<String, Integer>> featureFlags,
List<String> enabledModules
) {}
public record UserPreferences(
String userId,
Map<String, String> preferences,
List<String> favouriteFeatures
) {}
public record SystemState(
AppConfig config,
Map<String, UserPreferences> userPrefs,
Map<String, Integer> metrics
) {}
Common Operations:
- "Add a new setting to the configuration"
- "Remove an outdated feature flag"
- "Update a specific user's preference"
- "Check if a metric exists before incrementing it"
- "Delete a user's preferences entirely"
Traditional optics struggle with these operations. Traversals.forMap(key) can modify existing entries but cannot insert new ones or delete them. Direct map manipulation breaks composability. At solves this elegantly.
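To make the contrast concrete, here is a minimal sketch. It assumes, as referenced above, that Traversals.forMap(key) builds a Traversal focusing the value at that key; the exact signature is an assumption, but the modify call follows the Traversals.modify(traversal, f, s) shape used later in this guide.

```java
import java.util.HashMap;
import java.util.Map;
import org.higherkindedj.optics.At;
import org.higherkindedj.optics.Traversal;
import org.higherkindedj.optics.at.AtInstances;
import org.higherkindedj.optics.util.Traversals;

public class ForMapVsAtExample {
    public static void main(String[] args) {
        Map<String, Integer> metrics = new HashMap<>(Map.of("requests", 10));

        // Traversal approach: modifying an absent key is a no-op — nothing is inserted
        Traversal<Map<String, Integer>, Integer> errorsTraversal = Traversals.forMap("errors");
        Map<String, Integer> stillNoErrors =
            Traversals.modify(errorsTraversal, n -> n + 1, metrics);
        // stillNoErrors has no "errors" key

        // At approach: insertOrUpdate can create the entry, and remove can delete it
        At<Map<String, Integer>, String, Integer> at = AtInstances.mapAt();
        Map<String, Integer> withErrors = at.insertOrUpdate("errors", 1, metrics);
        // withErrors now contains errors=1
    }
}
```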
Think of At Like...
- A key to a lockbox 🔑: You can open it (read), put something in (insert), replace the contents (update), or empty it (delete)
- An index card in a filing cabinet 📇: You can retrieve the card, file a new one, update its contents, or remove it entirely
- A dictionary entry 📖: Looking up a word gives you its definition (if present) or nothing (if absent)
- A database row accessor 🗃️: SELECT, INSERT, UPDATE, and DELETE operations on a specific key
- A nullable field lens 🎯: Focusing on presence itself, not just the value
At vs Lens vs Traversal: Understanding the Differences
| Aspect | At | Lens | Traversal |
|---|---|---|---|
| Focus | Optional presence at index | Exactly one value | Zero or more values |
| Can insert? | ✅ Yes | ❌ No | ❌ No |
| Can delete? | ✅ Yes | ❌ No | ❌ No |
| Core operation | Lens<S, Optional<A>> | get(s), set(a, s) | modifyF(f, s, app) |
| Returns | Lens to Optional | Direct value | Modified structure |
| Use case | Map/List CRUD | Product field access | Bulk modifications |
| Intent | "Manage entry at this index" | "Access this field" | "Transform all elements" |
Key Insight: At returns a Lens that focuses on Optional<A>, not A directly. This means setting Optional.empty() removes the entry, whilst setting Optional.of(value) inserts or updates it. This simple abstraction enables powerful CRUD semantics within the optics framework.
A Step-by-Step Walkthrough
Step 1: Creating At Instances
Unlike Lens which can be generated with annotations, At instances are created using factory methods from AtInstances:
import org.higherkindedj.optics.At;
import org.higherkindedj.optics.at.AtInstances;
// At instance for Map<String, Integer>
At<Map<String, Integer>, String, Integer> mapAt = AtInstances.mapAt();
// At instance for List<String>
At<List<String>, Integer, String> listAt = AtInstances.listAt();
// At instance for List with auto-padding
At<List<String>, Integer, String> paddedListAt = AtInstances.listAtWithPadding(null);
Each factory method returns an At instance parameterised by:
- S – The structure type (e.g., Map<String, Integer>)
- I – The index type (e.g., String for maps, Integer for lists)
- A – The element type (e.g., Integer)
Step 2: Basic CRUD Operations
Once you have an At instance, you can perform full CRUD operations:
Create / Insert
Map<String, Integer> scores = new HashMap<>();
scores.put("alice", 100);
At<Map<String, Integer>, String, Integer> mapAt = AtInstances.mapAt();
// Insert a new entry
Map<String, Integer> withBob = mapAt.insertOrUpdate("bob", 85, scores);
// Result: {alice=100, bob=85}
// Original unchanged (immutability)
System.out.println(scores); // {alice=100}
Read / Query
Optional<Integer> aliceScore = mapAt.get("alice", withBob);
// Result: Optional[100]
Optional<Integer> charlieScore = mapAt.get("charlie", withBob);
// Result: Optional.empty()
boolean hasAlice = mapAt.contains("alice", withBob);
// Result: true
Update / Modify
// Update existing value
Map<String, Integer> updatedScores = mapAt.insertOrUpdate("alice", 110, withBob);
// Result: {alice=110, bob=85}
// Modify with function
Map<String, Integer> bonusScores = mapAt.modify("bob", x -> x + 10, updatedScores);
// Result: {alice=110, bob=95}
// Modify non-existent key is a no-op
Map<String, Integer> unchanged = mapAt.modify("charlie", x -> x + 10, bonusScores);
// Result: {alice=110, bob=95} (no charlie key)
Delete / Remove
Map<String, Integer> afterRemove = mapAt.remove("alice", bonusScores);
// Result: {bob=95}
// Remove non-existent key is a no-op
Map<String, Integer> stillSame = mapAt.remove("charlie", afterRemove);
// Result: {bob=95}
Step 3: The Lens to Optional Pattern
The core of At is its at(index) method, which returns a Lens<S, Optional<A>>:
At<Map<String, Integer>, String, Integer> mapAt = AtInstances.mapAt();
Lens<Map<String, Integer>, Optional<Integer>> aliceLens = mapAt.at("alice");
Map<String, Integer> scores = new HashMap<>(Map.of("alice", 100));
// Get: Returns Optional
Optional<Integer> score = aliceLens.get(scores);
// Result: Optional[100]
// Set with Optional.of(): Insert or update
Map<String, Integer> updated = aliceLens.set(Optional.of(150), scores);
// Result: {alice=150}
// Set with Optional.empty(): Delete
Map<String, Integer> deleted = aliceLens.set(Optional.empty(), scores);
// Result: {}
This pattern is powerful because the lens composes naturally with other optics:
record Config(Map<String, String> settings) {}
Lens<Config, Map<String, String>> settingsLens =
Lens.of(Config::settings, (c, s) -> new Config(s));
At<Map<String, String>, String, String> mapAt = AtInstances.mapAt();
// Compose: Config → Map<String, String> → Optional<String>
Lens<Config, Optional<String>> debugSettingLens =
settingsLens.andThen(mapAt.at("debug"));
Config config = new Config(new HashMap<>());
// Insert new setting through composed lens
Config withDebug = debugSettingLens.set(Optional.of("true"), config);
// Result: Config[settings={debug=true}]
// Delete setting through composed lens
Config withoutDebug = debugSettingLens.set(Optional.empty(), withDebug);
// Result: Config[settings={}]
Step 4: Deep Composition with Prisms
When you need to access the actual value (not the Optional wrapper), compose with Prisms.some():
import org.higherkindedj.optics.prism.Prisms;
Lens<Config, Optional<String>> debugLens =
settingsLens.andThen(mapAt.at("debug"));
// Prism that unwraps Optional
Prism<Optional<String>, String> somePrism = Prisms.some();
// Compose into a Traversal (0-or-1 focus)
Traversal<Config, String> debugValueTraversal =
debugLens.asTraversal().andThen(somePrism.asTraversal());
Config config = new Config(new HashMap<>(Map.of("debug", "false")));
// Modify the actual string value
Config modified = Traversals.modify(debugValueTraversal, String::toUpperCase, config);
// Result: Config[settings={debug=FALSE}]
// Get all focused values (0 or 1)
List<String> values = Traversals.getAll(debugValueTraversal, config);
// Result: ["FALSE"]
// If key is absent, traversal focuses on zero elements
Config empty = new Config(new HashMap<>());
List<String> noValues = Traversals.getAll(debugValueTraversal, empty);
// Result: []
This composition creates an "affine" optic—focusing on zero or one element—which correctly models the semantics of optional map access.
List Operations: Special Considerations
At for lists has important behavioural differences from maps:
Basic List Operations
At<List<String>, Integer, String> listAt = AtInstances.listAt();
List<String> items = new ArrayList<>(List.of("apple", "banana", "cherry"));
// Read element at index
Optional<String> second = listAt.get(1, items);
// Result: Optional["banana"]
// Out of bounds returns empty
Optional<String> outOfBounds = listAt.get(10, items);
// Result: Optional.empty()
// Update element at valid index
List<String> updated = listAt.insertOrUpdate(1, "BANANA", items);
// Result: ["apple", "BANANA", "cherry"]
Deletion Shifts Indices
Important: Removing a list element shifts all subsequent indices:
List<String> afterRemove = listAt.remove(1, items);
// Result: ["apple", "cherry"]
// Note: "cherry" is now at index 1, not 2!
// Original list unchanged
System.out.println(items); // ["apple", "banana", "cherry"]
This behaviour differs from map deletion, where keys remain stable. Consider this carefully when chaining operations.
Bounds Checking
// Update at invalid index throws exception
assertThrows(IndexOutOfBoundsException.class, () ->
listAt.insertOrUpdate(10, "oops", items));
// Use listAtWithPadding for auto-expansion
At<List<String>, Integer, String> paddedAt = AtInstances.listAtWithPadding(null);
List<String> sparse = new ArrayList<>(List.of("a"));
List<String> expanded = paddedAt.insertOrUpdate(4, "e", sparse);
// Result: ["a", null, null, null, "e"]
When to Use At vs Other Approaches
✅ Use At When:
- You need CRUD semantics: Insert, update, or delete operations on indexed structures
- Composability matters: You want to chain At with Lenses for deep nested access
- Immutability is required: You need functional, side-effect-free operations
- You're building configuration systems: Dynamic settings management
- You want consistent optics patterns: Keeping your codebase uniformly functional
❌ Avoid At When:
- You only modify existing values: Use Traversals.forMap(key) instead
- You need bulk operations: Use Traversal for all-element modifications
- Performance is critical: Direct Map operations may be faster (measure first!)
- You never delete entries: A simple Lens might suffice
Comparison with Direct Map Operations
// Direct Map manipulation (imperative)
Map<String, Integer> scores = new HashMap<>();
scores.put("alice", 100); // Mutates!
scores.remove("bob"); // Mutates!
Integer value = scores.get("alice"); // May be null
// At approach (functional)
At<Map<String, Integer>, String, Integer> at = AtInstances.mapAt();
Map<String, Integer> initial = new HashMap<>();
Map<String, Integer> with = at.insertOrUpdate("alice", 100, initial); // New map
Map<String, Integer> without = at.remove("bob", with); // New map
Optional<Integer> safeValue = at.get("alice", without); // Safe Optional
// Original 'initial' unchanged throughout
Common Pitfalls
❌ Don't: Assume null map values are distinguishable from absent keys
Map<String, Integer> map = new HashMap<>();
map.put("nullKey", null);
At<Map<String, Integer>, String, Integer> at = AtInstances.mapAt();
Optional<Integer> result = at.get("nullKey", map);
// Result: Optional.empty() - NOT Optional.of(null)!
// Java's Optional cannot hold null values
// Optional.ofNullable(null) returns Optional.empty()
✅ Do: Avoid null values in maps, or use wrapper types
// Option 1: Use sentinel values
Map<String, Integer> sentinelMap = new HashMap<>();
sentinelMap.put("unset", -1); // Sentinel for "not set"
// Option 2: Use Optional as the value type
Map<String, Optional<Integer>> optionalMap = new HashMap<>();
optionalMap.put("maybeNull", Optional.empty()); // Explicitly absent
❌ Don't: Forget that list removal shifts indices
At<List<String>, Integer, String> at = AtInstances.listAt();
List<String> items = new ArrayList<>(List.of("a", "b", "c", "d"));
// Remove "b" at index 1
List<String> step1 = at.remove(1, items); // ["a", "c", "d"]
// Now try to get what was at index 3 ("d")
Optional<String> result = at.get(3, step1);
// Result: Optional.empty() - index 3 is now out of bounds!
// "d" is now at index 2
✅ Do: Recalculate indices or iterate from end
// When removing multiple elements, iterate backwards
At<List<String>, Integer, String> at = AtInstances.listAt();
List<String> items = new ArrayList<>(List.of("a", "b", "c", "d"));
List<Integer> indicesToRemove = List.of(1, 3); // Remove "b" and "d"
// Sort descending and remove from end
List<String> result = items;
for (int i : indicesToRemove.stream().sorted(Comparator.reverseOrder()).toList()) {
result = at.remove(i, result);
}
// Result: ["a", "c"]
❌ Don't: Ignore Optional composition when you need the actual value
Lens<Config, Optional<String>> settingLens = ...;
// This gives you Optional, not the actual value
Optional<String> optValue = settingLens.get(config);
// To modify the actual string, you need to compose with a Prism
// Otherwise you're stuck wrapping/unwrapping manually
✅ Do: Use Prisms.some() for value-level operations
Prism<Optional<String>, String> some = Prisms.some();
Traversal<Config, String> valueTraversal =
settingLens.asTraversal().andThen(some.asTraversal());
// Now you can work with the actual String
Config result = Traversals.modify(valueTraversal, String::trim, config);
Performance Considerations
HashMap Operations
mapAt() creates a new HashMap on every modification:
// Each operation copies the entire map
Map<String, Integer> step1 = at.insertOrUpdate("a", 1, map); // O(n) copy
Map<String, Integer> step2 = at.insertOrUpdate("b", 2, step1); // O(n) copy
Map<String, Integer> step3 = at.remove("c", step2); // O(n) copy
Best Practice: Batch modifications when possible:
// ❌ Multiple At operations (3 map copies)
Map<String, Integer> result = at.insertOrUpdate("a", 1,
at.insertOrUpdate("b", 2,
at.remove("c", original)));
// ✅ Single bulk operation
Map<String, Integer> result = new HashMap<>(original);
result.put("a", 1);
result.put("b", 2);
result.remove("c");
// Then use At for subsequent immutable operations
List Operations
List modifications involve array copying:
At<List<String>, Integer, String> at = AtInstances.listAt();
// Update is O(n) - copies entire list
List<String> updated = at.insertOrUpdate(0, "new", original);
// Remove is O(n) - copies and shifts
List<String> removed = at.remove(0, original);
For large lists with frequent modifications, consider alternative data structures (persistent collections, tree-based structures) or batch operations.
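The batching advice shown above for maps applies equally to lists. A sketch, using the same listAt API from earlier in this guide: make one mutable copy, apply all edits, then return to immutable use.

```java
import java.util.ArrayList;
import java.util.List;
import org.higherkindedj.optics.At;
import org.higherkindedj.optics.at.AtInstances;

public class BatchedListUpdates {
    public static void main(String[] args) {
        List<String> original = List.of("a", "b", "c", "d");
        At<List<String>, Integer, String> at = AtInstances.listAt();

        // ❌ Three At operations: each copies the whole list
        List<String> slow = at.insertOrUpdate(0, "A",
            at.insertOrUpdate(1, "B",
                at.insertOrUpdate(2, "C", original)));

        // ✅ One mutable copy, batched edits, single immutable snapshot
        List<String> batch = new ArrayList<>(original);
        batch.set(0, "A");
        batch.set(1, "B");
        batch.set(2, "C");
        List<String> fast = List.copyOf(batch);

        System.out.println(slow.equals(fast)); // true — same result, fewer copies
    }
}
```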
Real-World Example: Feature Flag Management
Consider a feature flag system where different environments have different configurations:
public class FeatureFlagManager {
private final At<Map<String, Boolean>, String, Boolean> flagAt = AtInstances.mapAt();
private Map<String, Boolean> flags;
public FeatureFlagManager(Map<String, Boolean> initialFlags) {
this.flags = new HashMap<>(initialFlags);
}
public void enableFeature(String featureName) {
flags = flagAt.insertOrUpdate(featureName, true, flags);
}
public void disableFeature(String featureName) {
flags = flagAt.insertOrUpdate(featureName, false, flags);
}
public void removeFeature(String featureName) {
flags = flagAt.remove(featureName, flags);
}
public boolean isEnabled(String featureName) {
return flagAt.get(featureName, flags).orElse(false);
}
public Map<String, Boolean> getFlags() {
return Collections.unmodifiableMap(flags);
}
}
// Usage
var manager = new FeatureFlagManager(Map.of("dark_mode", true));
manager.enableFeature("new_dashboard");
manager.disableFeature("legacy_api");
manager.removeFeature("deprecated_feature");
System.out.println(manager.isEnabled("dark_mode")); // true
System.out.println(manager.isEnabled("new_dashboard")); // true
System.out.println(manager.isEnabled("nonexistent")); // false
This pattern ensures all flag operations maintain immutability internally whilst providing a clean mutable-style API externally.
Complete, Runnable Example
Here's a comprehensive example demonstrating all major At features:
package org.higherkindedj.example.optics;
import java.util.*;
import org.higherkindedj.optics.At;
import org.higherkindedj.optics.Lens;
import org.higherkindedj.optics.Prism;
import org.higherkindedj.optics.Traversal;
import org.higherkindedj.optics.annotations.GenerateLenses;
import org.higherkindedj.optics.at.AtInstances;
import org.higherkindedj.optics.prism.Prisms;
import org.higherkindedj.optics.util.Traversals;
public class AtUsageExample {
@GenerateLenses
public record UserProfile(
String username,
Map<String, String> settings,
Map<String, Integer> scores,
List<String> tags
) {}
public static void main(String[] args) {
System.out.println("=== At Type Class Usage Examples ===\n");
// 1. Map CRUD Operations
System.out.println("--- Map CRUD Operations ---");
At<Map<String, Integer>, String, Integer> mapAt = AtInstances.mapAt();
Map<String, Integer> scores = new HashMap<>(Map.of("maths", 95, "english", 88));
System.out.println("Initial scores: " + scores);
// Insert
Map<String, Integer> withScience = mapAt.insertOrUpdate("science", 92, scores);
System.out.println("After insert 'science': " + withScience);
// Update
Map<String, Integer> updatedMaths = mapAt.insertOrUpdate("maths", 98, withScience);
System.out.println("After update 'maths': " + updatedMaths);
// Read
System.out.println("Physics score (absent): " + mapAt.get("physics", updatedMaths));
System.out.println("Maths score (present): " + mapAt.get("maths", updatedMaths));
// Delete
Map<String, Integer> afterRemove = mapAt.remove("english", updatedMaths);
System.out.println("After remove 'english': " + afterRemove);
// Modify
Map<String, Integer> bonusMaths = mapAt.modify("maths", x -> x + 5, afterRemove);
System.out.println("After modify 'maths' (+5): " + bonusMaths);
System.out.println("Original unchanged: " + scores);
System.out.println();
// 2. Lens Composition
System.out.println("--- Lens Composition with At ---");
// Use generated lenses from @GenerateLenses annotation
Lens<UserProfile, Map<String, String>> settingsLens = UserProfileLenses.settings();
At<Map<String, String>, String, String> stringMapAt = AtInstances.mapAt();
UserProfile profile = new UserProfile(
"alice",
new HashMap<>(Map.of("theme", "dark", "language", "en")),
new HashMap<>(Map.of("maths", 95)),
new ArrayList<>(List.of("developer"))
);
System.out.println("Initial profile: " + profile);
// Compose: UserProfile → Map → Optional<String>
Lens<UserProfile, Optional<String>> themeLens =
settingsLens.andThen(stringMapAt.at("theme"));
System.out.println("Current theme: " + themeLens.get(profile));
// Update through composed lens
UserProfile lightTheme = themeLens.set(Optional.of("light"), profile);
System.out.println("After setting theme: " + lightTheme.settings());
// Add new setting
Lens<UserProfile, Optional<String>> notifLens =
settingsLens.andThen(stringMapAt.at("notifications"));
UserProfile withNotif = notifLens.set(Optional.of("enabled"), lightTheme);
System.out.println("After adding notification: " + withNotif.settings());
// Remove setting
Lens<UserProfile, Optional<String>> langLens =
settingsLens.andThen(stringMapAt.at("language"));
UserProfile noLang = langLens.set(Optional.empty(), withNotif);
System.out.println("After removing language: " + noLang.settings());
System.out.println();
// 3. Deep Composition with Prism
System.out.println("--- Deep Composition: At + Prism ---");
Lens<UserProfile, Map<String, Integer>> scoresLens = UserProfileLenses.scores();
At<Map<String, Integer>, String, Integer> scoresAt = AtInstances.mapAt();
Prism<Optional<Integer>, Integer> somePrism = Prisms.some();
// Compose into Traversal (0-or-1 focus)
Lens<UserProfile, Optional<Integer>> mathsLens = scoresLens.andThen(scoresAt.at("maths"));
Traversal<UserProfile, Integer> mathsTraversal =
mathsLens.asTraversal().andThen(somePrism.asTraversal());
UserProfile bob = new UserProfile(
"bob",
new HashMap<>(),
new HashMap<>(Map.of("maths", 85, "science", 90)),
new ArrayList<>()
);
System.out.println("Bob's profile: " + bob);
// Get via traversal
List<Integer> mathsScores = Traversals.getAll(mathsTraversal, bob);
System.out.println("Maths score via traversal: " + mathsScores);
// Modify via traversal
UserProfile boostedBob = Traversals.modify(mathsTraversal, x -> x + 10, bob);
System.out.println("After boosting maths by 10: " + boostedBob.scores());
// Missing key = empty traversal
Traversal<UserProfile, Integer> historyTraversal =
scoresLens.andThen(scoresAt.at("history"))
.asTraversal().andThen(somePrism.asTraversal());
List<Integer> historyScores = Traversals.getAll(historyTraversal, bob);
System.out.println("History score (absent): " + historyScores);
System.out.println("\n=== All operations maintain immutability ===");
}
}
Expected Output:
=== At Type Class Usage Examples ===
--- Map CRUD Operations ---
Initial scores: {maths=95, english=88}
After insert 'science': {maths=95, science=92, english=88}
After update 'maths': {maths=98, science=92, english=88}
Physics score (absent): Optional.empty
Maths score (present): Optional[98]
After remove 'english': {maths=98, science=92}
After modify 'maths' (+5): {maths=103, science=92}
Original unchanged: {maths=95, english=88}
--- Lens Composition with At ---
Initial profile: UserProfile[username=alice, settings={theme=dark, language=en}, scores={maths=95}, tags=[developer]]
Current theme: Optional[dark]
After setting theme: {theme=light, language=en}
After adding notification: {theme=light, language=en, notifications=enabled}
After removing language: {theme=light, notifications=enabled}
--- Deep Composition: At + Prism ---
Bob's profile: UserProfile[username=bob, settings={}, scores={maths=85, science=90}, tags=[]]
Maths score via traversal: [85]
After boosting maths by 10: {maths=95, science=90}
History score (absent): []
=== All operations maintain immutability ===
Further Reading
- Haskell Lens Library - At Type Class
- Optics By Example - Comprehensive optics guide
- Lenses Guide - Foundation for At composition
- Prisms Guide - Unwrapping Optional with Prisms
- Traversals Guide - Bulk operations on collections
Summary
The At type class provides a powerful abstraction for indexed CRUD operations on collections:
- Lens to Optional: at(index) returns Lens<S, Optional<A>>, enabling insert/update/delete
- Immutable by design: All operations return new structures
- Composable: Chains naturally with other optics for deep access
- Type-safe: Leverages Java's type system for safety
At bridges the gap between pure functional optics and practical collection manipulation, enabling you to build robust, immutable data pipelines that handle the full lifecycle of indexed data.
Related: For read/update-only operations without insert/delete semantics, see the Ixed Type Class guide, which provides safe partial access via Traversal<S, A>.
Previous: Setters | Next: Ixed Type Class
Ixed Type Class: Safe Indexed Access
Zero-or-One Element Traversals
- How to safely access and update existing elements in indexed structures
- Understanding the Traversal<S, A> pattern for partial access
- Factory methods: mapIx(), listIx(), fromAt()
- The key difference between Ixed (read/update only) and At (full CRUD)
- Composing Ixed with Lenses for safe deep access into nested collections
- How Ixed is built on At internally using Prisms.some()
- When to use Ixed vs At vs direct collection operations
- Building safe, structure-preserving data pipelines
In the previous guide on At, we explored how to perform full CRUD operations on indexed structures—inserting new entries, deleting existing ones, and updating values. But what if you only need to read and update elements that already exist, without the ability to insert or delete? What if you want operations that automatically become no-ops when an index is absent, preserving the structure unchanged?
This is where Ixed shines. It provides a Traversal that focuses on zero or one element at a given index—perfect for safe, partial access patterns where you want to modify existing data without changing the structure's shape.
The Scenario: Safe Configuration Reading
Consider a configuration system where you need to read and update existing settings, but deliberately want to avoid accidentally creating new entries:
The Data Model:
public record ServerConfig(
String serverName,
Map<String, String> environment,
Map<String, Integer> ports,
List<String> allowedHosts
) {}
public record DatabaseConfig(
String connectionString,
Map<String, String> poolSettings,
List<String> replicaHosts
) {}
public record ApplicationSettings(
ServerConfig server,
DatabaseConfig database,
Map<String, String> featureToggles
) {}
Common Operations:
- "Read the current database pool size setting"
- "Update the max connections if it exists"
- "Modify the port number for an existing service"
- "Safely access the nth replica host if it exists"
- "Never accidentally create new configuration keys"
The key requirement here is safety: you want to interact with existing data without risk of accidentally expanding the structure. If a key doesn't exist, the operation should simply do nothing rather than insert a new entry.
Think of Ixed Like...
- A read-only database view with UPDATE privileges 🔍: You can SELECT and UPDATE existing rows, but cannot INSERT new ones or DELETE existing ones
- A safe array accessor 🛡️: Returns nothing for out-of-bounds indices instead of throwing exceptions
- A peephole in a door 👁️: You can see what's there and modify it if present, but you can't add or remove anything
- A library card catalogue lookup 📚: You can find and update existing book entries, but adding new books requires different permissions
- A partial function 🎯: Operates only where defined, silently ignores undefined inputs
Ixed vs At vs Traversal vs Prism: Understanding the Relationships
| Aspect | Ixed | At | Traversal | Prism |
|---|---|---|---|---|
| Focus | Zero or one element at index | Optional presence at index | Zero or more elements | Zero or one variant case |
| Can insert? | ❌ No | ✅ Yes | ❌ No | ✅ Yes (via build) |
| Can delete? | ❌ No | ✅ Yes | ❌ No | ❌ No |
| Core operation | Traversal<S, A> | Lens<S, Optional<A>> | modifyF(f, s, app) | getOptional, build |
| Returns | Traversal (0-or-1 focus) | Lens to Optional | Modified structure | Optional value |
| Use case | Safe partial access | Map/List CRUD | Bulk modifications | Sum type handling |
| Intent | "Access if exists" | "Manage entry at index" | "Transform all elements" | "Match this case" |
| Structure change | ❌ Never changes | ✅ Can change size | ❌ Preserves count | ✅ Can change type |
Key Insight: Ixed is actually built on top of At. Internally, it composes At.at(index) with Prisms.some() to unwrap the Optional layer. This means Ixed inherits the precise boundary behaviour of At whilst removing the ability to insert or delete entries.
// Ixed is conceptually:
// at.at(index).asTraversal().andThen(Prisms.some().asTraversal())
// At gives: Lens<S, Optional<A>>
// Prisms.some() gives: Prism<Optional<A>, A>
// Composed: Traversal<S, A> focusing on 0-or-1 elements
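To make that composition concrete, here is a minimal plain-JDK sketch of the same idea (hypothetical helper names, not the library's classes): an At-style view of an entry as an Optional, composed with a some()-style unwrap, yields an update that can never insert.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.UnaryOperator;

public class IxFromAtSketch {
    // At-style getter: total, reports presence at the key as an Optional.
    static <K, V> Optional<V> at(Map<K, V> m, K key) {
        return Optional.ofNullable(m.get(key));
    }

    // At-style setter: Optional.of inserts/updates, Optional.empty deletes.
    static <K, V> Map<K, V> setAt(Map<K, V> m, K key, Optional<V> value) {
        Map<K, V> copy = new HashMap<>(m);
        if (value.isPresent()) copy.put(key, value.get()); else copy.remove(key);
        return copy;
    }

    // Ixed-style modify: At composed through some() - only fires when present.
    static <K, V> Map<K, V> ixModify(Map<K, V> m, K key, UnaryOperator<V> f) {
        return at(m, key)                                  // Optional presence (At)
            .map(f)                                        // unwrap + transform (some)
            .map(v -> setAt(m, key, Optional.of(v)))       // write back (At)
            .orElse(m);                                    // absent: no-op
    }

    public static void main(String[] args) {
        Map<String, Integer> ports = new HashMap<>(Map.of("http", 8080));
        System.out.println(ixModify(ports, "http", x -> x + 1)); // {http=8081}
        System.out.println(ixModify(ports, "ftp", x -> x + 1));  // {http=8080}
    }
}
```

Note how the no-insert guarantee falls out of the composition: the some() step short-circuits on absence before the setter is ever reached.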
A Step-by-Step Walkthrough
Step 1: Creating Ixed Instances
Like At, Ixed instances are created using factory methods from IxedInstances:
import org.higherkindedj.optics.Ixed;
import org.higherkindedj.optics.ixed.IxedInstances;
// Ixed instance for Map<String, Integer>
Ixed<Map<String, Integer>, String, Integer> mapIx = IxedInstances.mapIx();
// Ixed instance for List<String>
Ixed<List<String>, Integer, String> listIx = IxedInstances.listIx();
// Create Ixed from any At instance
At<Map<String, String>, String, String> customAt = AtInstances.mapAt();
Ixed<Map<String, String>, String, String> customIx = IxedInstances.fromAt(customAt);
Each factory method returns an Ixed instance parameterised by:
- S – The structure type (e.g., Map<String, Integer>)
- I – The index type (e.g., String for maps, Integer for lists)
- A – The element type (e.g., Integer)
Step 2: Safe Read Operations
Ixed provides safe reading that returns Optional.empty() for missing indices:
Map<String, Integer> ports = new HashMap<>();
ports.put("http", 8080);
ports.put("https", 8443);
Ixed<Map<String, Integer>, String, Integer> mapIx = IxedInstances.mapIx();
// Read existing key
Optional<Integer> httpPort = IxedInstances.get(mapIx, "http", ports);
// Result: Optional[8080]
// Read missing key - no exception, just empty
Optional<Integer> ftpPort = IxedInstances.get(mapIx, "ftp", ports);
// Result: Optional.empty()
// Check existence
boolean hasHttp = IxedInstances.contains(mapIx, "http", ports);
// Result: true
boolean hasFtp = IxedInstances.contains(mapIx, "ftp", ports);
// Result: false
Compare this with direct map access, which may return null or require explicit containsKey checks.
Step 3: Update Operations (No Insertion!)
The crucial difference from At is that update only modifies existing entries:
Map<String, Integer> ports = new HashMap<>();
ports.put("http", 8080);
ports.put("https", 8443);
Ixed<Map<String, Integer>, String, Integer> mapIx = IxedInstances.mapIx();
// Update existing key - works as expected
Map<String, Integer> updatedPorts = IxedInstances.update(mapIx, "http", 9000, ports);
// Result: {http=9000, https=8443}
// Attempt to "update" non-existent key - NO-OP!
Map<String, Integer> samePorts = IxedInstances.update(mapIx, "ftp", 21, ports);
// Result: {http=8080, https=8443} - NO ftp key added!
// Original unchanged (immutability)
System.out.println(ports); // {http=8080, https=8443}
This is the defining characteristic of Ixed: it will never change the structure's shape. If an index doesn't exist, operations silently become no-ops.
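For comparison, the JDK's own Map.replace has exactly this replace-only contract on mutable maps; a small sketch of the analogy, working on a defensive copy to keep the original intact:

```java
import java.util.HashMap;
import java.util.Map;

public class ReplaceOnlySketch {
    public static void main(String[] args) {
        Map<String, Integer> ports = Map.of("http", 8080, "https", 8443);

        // Map.replace only touches existing mappings - the JDK analogue of
        // Ixed's update. A missing key is silently a no-op, never an insert.
        Map<String, Integer> copy = new HashMap<>(ports);
        copy.replace("http", 9000);  // existing key: updated
        copy.replace("ftp", 21);     // missing key: nothing happens

        System.out.println(copy.get("http"));       // 9000
        System.out.println(copy.containsKey("ftp")); // false
    }
}
```

The difference is that Ixed packages this contract immutably and composably, whereas Map.replace mutates in place.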
Step 4: Functional Modification
Apply functions to existing elements only:
Map<String, Integer> scores = new HashMap<>();
scores.put("alice", 100);
scores.put("bob", 85);
Ixed<Map<String, Integer>, String, Integer> mapIx = IxedInstances.mapIx();
// Modify existing value
Map<String, Integer> bonusAlice = IxedInstances.modify(mapIx, "alice", x -> x + 10, scores);
// Result: {alice=110, bob=85}
// Modify non-existent key - no-op
Map<String, Integer> unchanged = IxedInstances.modify(mapIx, "charlie", x -> x + 10, scores);
// Result: {alice=100, bob=85} - no charlie key created
This pattern is excellent for operations like "increment if exists" or "apply transformation to known entries".
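The mutable-map counterpart of this "increment if exists" pattern is the JDK's Map.computeIfPresent, shown here as a point of reference:

```java
import java.util.HashMap;
import java.util.Map;

public class ModifyIfExistsSketch {
    public static void main(String[] args) {
        Map<String, Integer> scores = new HashMap<>(Map.of("alice", 100, "bob", 85));

        // computeIfPresent applies the function only when the key is mapped
        // to a non-null value - missing keys are left completely untouched.
        scores.computeIfPresent("alice", (k, v) -> v + 10);   // alice -> 110
        scores.computeIfPresent("charlie", (k, v) -> v + 10); // no-op

        System.out.println(scores.get("alice"));         // 110
        System.out.println(scores.containsKey("charlie")); // false
    }
}
```

Ixed generalises this idea beyond maps and keeps it immutable, so the same "modify if present" semantics compose with lenses and traversals.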
Step 5: Composition with Other Optics
Ixed composes naturally with Lenses for deep, safe access:
record Config(Map<String, Integer> settings) {}
Lens<Config, Map<String, Integer>> settingsLens =
Lens.of(Config::settings, (c, s) -> new Config(s));
Ixed<Map<String, Integer>, String, Integer> mapIx = IxedInstances.mapIx();
// Compose: Config → Map<String, Integer> → Integer (0-or-1)
Traversal<Config, Integer> maxConnectionsTraversal =
settingsLens.asTraversal().andThen(mapIx.ix("maxConnections"));
Config config = new Config(new HashMap<>(Map.of("maxConnections", 100, "timeout", 30)));
// Safe access through composed traversal
List<Integer> values = Traversals.getAll(maxConnectionsTraversal, config);
// Result: [100]
// Safe modification through composed traversal
Config updated = Traversals.modify(maxConnectionsTraversal, x -> x * 2, config);
// Result: Config[settings={maxConnections=200, timeout=30}]
// Missing key = empty focus, modification is no-op
Traversal<Config, Integer> missingTraversal =
settingsLens.asTraversal().andThen(mapIx.ix("nonexistent"));
Config unchanged = Traversals.modify(missingTraversal, x -> x + 1, config);
// Result: Config unchanged, no "nonexistent" key added
This composition gives you type-safe, deep access that automatically handles missing intermediate keys.
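The shape of that composition can be sketched with plain functions (a hypothetical Config record, not the library's optics): a total getter into the map, then a partial indexed lookup, fused into one safe Config-to-Optional path.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

public class DeepAccessSketch {
    record Config(Map<String, Integer> settings) {}

    public static void main(String[] args) {
        // Lens-like getter: Config -> Map<String, Integer> (total)
        Function<Config, Map<String, Integer>> settings = Config::settings;

        // Ixed-like safe lookup: Map -> Optional<Integer> (partial)
        Function<Map<String, Integer>, Optional<Integer>> ixMax =
            m -> Optional.ofNullable(m.get("maxConnections"));

        // Composition: total then partial = safe deep read
        Function<Config, Optional<Integer>> deep = settings.andThen(ixMax);

        Config config = new Config(new HashMap<>(Map.of("maxConnections", 100)));
        System.out.println(deep.apply(config));                      // Optional[100]
        System.out.println(deep.apply(new Config(new HashMap<>()))); // Optional.empty
    }
}
```

A real Traversal carries the write direction as well, but the read side illustrates why missing intermediate keys simply produce an empty focus rather than an error.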
List Operations: Safe Indexed Access
Ixed for lists provides bounds-safe operations:
Safe Element Access
Ixed<List<String>, Integer, String> listIx = IxedInstances.listIx();
List<String> items = new ArrayList<>(List.of("apple", "banana", "cherry"));
// Access valid index
Optional<String> second = IxedInstances.get(listIx, 1, items);
// Result: Optional["banana"]
// Access out-of-bounds - no exception!
Optional<String> tenth = IxedInstances.get(listIx, 10, items);
// Result: Optional.empty()
// Negative index - safely handled
Optional<String> negative = IxedInstances.get(listIx, -1, items);
// Result: Optional.empty()
Safe Element Update
// Update existing index
List<String> updated = IxedInstances.update(listIx, 1, "BANANA", items);
// Result: ["apple", "BANANA", "cherry"]
// Update out-of-bounds - no-op, no exception
List<String> unchanged = IxedInstances.update(listIx, 10, "grape", items);
// Result: ["apple", "banana", "cherry"] - no element added!
// Modify with function
List<String> uppercased = IxedInstances.modify(listIx, 0, String::toUpperCase, items);
// Result: ["APPLE", "banana", "cherry"]
// Original always unchanged
System.out.println(items); // ["apple", "banana", "cherry"]
Important Contrast with At: At.insertOrUpdate() on a list will throw IndexOutOfBoundsException for invalid indices (unless using padding), whilst IxedInstances.update() simply returns the list unchanged.
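A minimal JDK sketch of the bounds-safe update semantics (a hypothetical helper, not the library method) shows how little machinery is involved:

```java
import java.util.ArrayList;
import java.util.List;

public class SafeListUpdateSketch {
    // Returns a copy with index i replaced, or the original list unchanged
    // when i is out of bounds - mirroring Ixed's no-op instead of throwing.
    static <A> List<A> updateAt(List<A> list, int i, A value) {
        if (i < 0 || i >= list.size()) return list;  // no-op, no exception
        List<A> copy = new ArrayList<>(list);
        copy.set(i, value);
        return copy;
    }

    public static void main(String[] args) {
        List<String> items = List.of("apple", "banana", "cherry");
        System.out.println(updateAt(items, 1, "BANANA"));  // [apple, BANANA, cherry]
        System.out.println(updateAt(items, 10, "grape"));  // [apple, banana, cherry]
        System.out.println(updateAt(items, -1, "grape"));  // [apple, banana, cherry]
    }
}
```

The bounds check up front is the entire safety story: invalid indices never reach the mutation step.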
When to Use Ixed vs Other Approaches
✅ Use Ixed When:
- You want safe partial access: Operations that become no-ops on missing indices
- Structure preservation is critical: You must not accidentally add or remove entries
- You're reading configuration files: Accessing known keys without creating defaults
- You need composable traversals: Building deep access paths that handle missing intermediates
- You want to avoid exceptions: Out-of-bounds list access should be safe, not throw
- Immutability matters: All operations return new structures
❌ Avoid Ixed When:
- You need to insert new entries: Use At.insertOrUpdate() instead
- You need to delete entries: Use At.remove() instead
- You want to set defaults for missing keys: Use At with Optional.of(defaultValue)
- You need bulk operations: Use Traversal for all-element modifications
- Performance is critical: Direct collection access may be faster (measure first!)
Ixed vs At: Choosing the Right Tool
// Scenario: Update user's email if they exist, do nothing if they don't
Map<String, String> users = new HashMap<>(Map.of("alice", "alice@example.com"));
// With At - DANGER: Might accidentally create user!
At<Map<String, String>, String, String> at = AtInstances.mapAt();
Map<String, String> result1 = at.insertOrUpdate("bob", "bob@example.com", users);
// Result: {alice=alice@example.com, bob=bob@example.com} - Bob added!
// With Ixed - SAFE: Will not create if missing
Ixed<Map<String, String>, String, String> ix = IxedInstances.mapIx();
Map<String, String> result2 = IxedInstances.update(ix, "bob", "bob@example.com", users);
// Result: {alice=alice@example.com} - No Bob, as intended!
Use At when you explicitly want CRUD semantics; use Ixed when you want read/update-only with automatic no-ops for missing indices.
Common Pitfalls
❌ Don't: Expect Ixed to insert new entries
Ixed<Map<String, Integer>, String, Integer> mapIx = IxedInstances.mapIx();
Map<String, Integer> empty = new HashMap<>();
Map<String, Integer> result = IxedInstances.update(mapIx, "key", 100, empty);
// Result: {} - STILL EMPTY! No insertion occurred.
// If you need insertion, use At instead:
At<Map<String, Integer>, String, Integer> mapAt = AtInstances.mapAt();
Map<String, Integer> withKey = mapAt.insertOrUpdate("key", 100, empty);
// Result: {key=100}
✅ Do: Use Ixed for safe, non-inserting updates
// Perfect for updating known configuration keys
Map<String, String> config = new HashMap<>(Map.of("theme", "dark", "language", "en"));
Ixed<Map<String, String>, String, String> ix = IxedInstances.mapIx();
// Only updates keys that exist
Map<String, String> updated = IxedInstances.update(ix, "theme", "light", config);
// Result: {theme=light, language=en}
// Typo in key name? No problem - just a no-op
Map<String, String> unchanged = IxedInstances.update(ix, "tehme", "light", config); // typo!
// Result: {theme=dark, language=en} - no new key created
❌ Don't: Assume update failure means an error
Ixed<List<String>, Integer, String> listIx = IxedInstances.listIx();
List<String> items = new ArrayList<>(List.of("a", "b", "c"));
List<String> result = IxedInstances.update(listIx, 10, "z", items);
// Result is the same list - but this is SUCCESS, not failure!
// The operation correctly did nothing because index 10 doesn't exist.
✅ Do: Check for existence first if you need to know
Ixed<Map<String, Integer>, String, Integer> mapIx = IxedInstances.mapIx();
Map<String, Integer> scores = new HashMap<>(Map.of("alice", 100));
// If you need to know whether update will succeed:
if (IxedInstances.contains(mapIx, "bob", scores)) {
Map<String, Integer> updated = IxedInstances.update(mapIx, "bob", 90, scores);
System.out.println("Updated Bob's score");
} else {
System.out.println("Bob not found - consider using At to insert");
}
❌ Don't: Forget that Ixed inherits At's null value limitations
Map<String, Integer> map = new HashMap<>();
map.put("nullValue", null);
Ixed<Map<String, Integer>, String, Integer> mapIx = IxedInstances.mapIx();
Optional<Integer> result = IxedInstances.get(mapIx, "nullValue", map);
// Result: Optional.empty() - NOT Optional.of(null)!
// Java's Optional cannot hold null values
// This is inherited from the underlying At implementation
✅ Do: Avoid null values in collections
// Use sentinel values or wrapper types if you need to distinguish null from absent
Map<String, Optional<Integer>> map = new HashMap<>();
map.put("maybeNull", Optional.empty()); // Explicitly absent value
map.put("hasValue", Optional.of(42)); // Present value
// Or use At directly if you need to distinguish presence from null
At<Map<String, Integer>, String, Integer> at = AtInstances.mapAt();
// at.contains("key", map) checks key presence, not value
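The null-versus-absent distinction is easy to demonstrate with plain JDK calls (a sketch of the underlying behaviour, not the library itself):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class NullValueSketch {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("nullValue", null);

        // The key is present...
        System.out.println(map.containsKey("nullValue"));              // true
        // ...but an Optional-based read cannot tell null from absent:
        System.out.println(Optional.ofNullable(map.get("nullValue"))); // Optional.empty
        System.out.println(Optional.ofNullable(map.get("missing")));  // Optional.empty
    }
}
```

Any abstraction built on Optional reads inherits this collapse of null into absence, which is why avoiding null values in collections is the safer design.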
Performance Considerations
Immutability Overhead
Like At, all Ixed operations create new collection instances:
Ixed<Map<String, Integer>, String, Integer> mapIx = IxedInstances.mapIx();
// Each operation creates a new HashMap copy - O(n)
Map<String, Integer> step1 = IxedInstances.update(mapIx, "a", 1, original); // Copy
Map<String, Integer> step2 = IxedInstances.modify(mapIx, "b", x -> x + 1, step1); // Copy
Map<String, Integer> step3 = IxedInstances.update(mapIx, "c", 3, step2); // Copy
Best Practice: Batch modifications when possible, or accept the immutability overhead for correctness:
// For multiple updates, consider direct immutable construction
Map<String, Integer> result = new HashMap<>(original);
result.put("a", 1); // Mutable during construction
result.compute("b", (k, v) -> v != null ? v + 1 : v);
result.put("c", 3);
// Now use Ixed for subsequent safe operations
Composition Overhead
Composed traversals have minimal overhead since they're just function compositions:
// Composition is cheap - just wraps functions
Traversal<Config, Integer> deep = lens.asTraversal().andThen(mapIx.ix("key"));
// The overhead is in the actual modification, not the composition
Config result = Traversals.modify(deep, x -> x + 1, config);
Compared to At
Ixed has essentially the same performance characteristics as At since it's built on top of it. The additional Prism composition adds negligible overhead.
Real-World Example: Safe Feature Toggle Reader
Consider a system where feature toggles are read from external configuration but should never be accidentally created:
public class SafeFeatureReader {
private final Ixed<Map<String, Boolean>, String, Boolean> featureIx =
IxedInstances.mapIx();
private final Map<String, Boolean> features;
public SafeFeatureReader(Map<String, Boolean> initialFeatures) {
// Create immutable snapshot
this.features = new HashMap<>(initialFeatures);
}
public boolean isEnabled(String featureName) {
// Safe read - returns false for unknown features
return IxedInstances.get(featureIx, featureName, features).orElse(false);
}
public boolean isKnownFeature(String featureName) {
return IxedInstances.contains(featureIx, featureName, features);
}
public Map<String, Boolean> withFeatureUpdated(String featureName, boolean enabled) {
// Safe update - only modifies existing features
// Will NOT add new features even if called with unknown name
return IxedInstances.update(featureIx, featureName, enabled, features);
}
public Map<String, Boolean> withFeatureToggled(String featureName) {
// Safe toggle - flips value if exists, no-op if missing
return IxedInstances.modify(featureIx, featureName, current -> !current, features);
}
public Set<String> getKnownFeatures() {
return Collections.unmodifiableSet(features.keySet());
}
}
// Usage
Map<String, Boolean> config = Map.of(
"dark_mode", true,
"new_dashboard", false,
"beta_features", true
);
SafeFeatureReader reader = new SafeFeatureReader(config);
// Safe reads
System.out.println(reader.isEnabled("dark_mode")); // true
System.out.println(reader.isEnabled("unknown")); // false (default)
System.out.println(reader.isKnownFeature("unknown")); // false
// Safe updates - won't create new features
Map<String, Boolean> updated = reader.withFeatureUpdated("new_dashboard", true);
// Result: {dark_mode=true, new_dashboard=true, beta_features=true}
Map<String, Boolean> unchanged = reader.withFeatureUpdated("typo_feature", true);
// Result: {dark_mode=true, new_dashboard=false, beta_features=true}
// No "typo_feature" added!
// Safe toggle
Map<String, Boolean> toggled = reader.withFeatureToggled("beta_features");
// Result: {dark_mode=true, new_dashboard=false, beta_features=false}
This pattern ensures configuration integrity—you can never accidentally pollute your feature flags with typos or unknown keys.
Complete, Runnable Example
Here's a comprehensive example demonstrating all major Ixed features:
package org.higherkindedj.example.optics;
import java.util.*;
import org.higherkindedj.optics.At;
import org.higherkindedj.optics.Ixed;
import org.higherkindedj.optics.Lens;
import org.higherkindedj.optics.Traversal;
import org.higherkindedj.optics.annotations.GenerateLenses;
import org.higherkindedj.optics.at.AtInstances;
import org.higherkindedj.optics.ixed.IxedInstances;
import org.higherkindedj.optics.util.Traversals;
public class IxedUsageExample {
@GenerateLenses
public record ServerConfig(
String name,
Map<String, Integer> ports,
Map<String, String> environment,
List<String> allowedOrigins
) {}
public static void main(String[] args) {
System.out.println("=== Ixed Type Class Usage Examples ===\n");
// 1. Basic Map Operations - Safe Access Only
System.out.println("--- Map Safe Access (No Insertion) ---");
Ixed<Map<String, Integer>, String, Integer> mapIx = IxedInstances.mapIx();
Map<String, Integer> ports = new HashMap<>(Map.of("http", 8080, "https", 8443));
System.out.println("Initial ports: " + ports);
// Safe read
System.out.println("HTTP port: " + IxedInstances.get(mapIx, "http", ports));
System.out.println("FTP port (missing): " + IxedInstances.get(mapIx, "ftp", ports));
// Safe update - only existing keys
Map<String, Integer> updatedPorts = IxedInstances.update(mapIx, "http", 9000, ports);
System.out.println("After update 'http': " + updatedPorts);
// Attempted update of non-existent key - NO-OP!
Map<String, Integer> samePorts = IxedInstances.update(mapIx, "ftp", 21, ports);
System.out.println("After 'update' non-existent 'ftp': " + samePorts);
System.out.println("FTP was NOT added (Ixed doesn't insert)");
// Modify with function
Map<String, Integer> doubled = IxedInstances.modify(mapIx, "https", x -> x * 2, ports);
System.out.println("After doubling 'https': " + doubled);
System.out.println("Original unchanged: " + ports);
System.out.println();
// 2. Contrast with At (CRUD)
System.out.println("--- Ixed vs At: Insertion Behaviour ---");
At<Map<String, Integer>, String, Integer> mapAt = AtInstances.mapAt();
Map<String, Integer> empty = new HashMap<>();
// At CAN insert
Map<String, Integer> withNew = mapAt.insertOrUpdate("newKey", 42, empty);
System.out.println("At.insertOrUpdate on empty map: " + withNew);
// Ixed CANNOT insert
Map<String, Integer> stillEmpty = IxedInstances.update(mapIx, "newKey", 42, empty);
System.out.println("Ixed.update on empty map: " + stillEmpty);
System.out.println("Ixed preserves structure - no insertion occurred");
System.out.println();
// 3. List Safe Indexed Access
System.out.println("--- List Safe Indexed Access ---");
Ixed<List<String>, Integer, String> listIx = IxedInstances.listIx();
List<String> origins = new ArrayList<>(List.of("localhost", "example.com", "api.example.com"));
System.out.println("Initial origins: " + origins);
// Safe bounds checking
System.out.println("Index 1: " + IxedInstances.get(listIx, 1, origins));
System.out.println("Index 10 (out of bounds): " + IxedInstances.get(listIx, 10, origins));
System.out.println("Index -1 (negative): " + IxedInstances.get(listIx, -1, origins));
// Safe update within bounds
List<String> updated = IxedInstances.update(listIx, 1, "www.example.com", origins);
System.out.println("After update index 1: " + updated);
// Update out of bounds - no-op, no exception!
List<String> unchanged = IxedInstances.update(listIx, 10, "invalid.com", origins);
System.out.println("After 'update' out-of-bounds index 10: " + unchanged);
System.out.println("No exception thrown, list unchanged");
// Functional modification
List<String> uppercased = IxedInstances.modify(listIx, 0, String::toUpperCase, origins);
System.out.println("After uppercase index 0: " + uppercased);
System.out.println("Original unchanged: " + origins);
System.out.println();
// 4. Composition with Lenses
System.out.println("--- Deep Composition: Lens + Ixed ---");
// Use generated lenses from @GenerateLenses annotation
Lens<ServerConfig, Map<String, Integer>> portsLens = ServerConfigLenses.ports();
Lens<ServerConfig, Map<String, String>> envLens = ServerConfigLenses.environment();
ServerConfig config = new ServerConfig(
"production",
new HashMap<>(Map.of("http", 8080, "https", 8443, "ws", 8765)),
new HashMap<>(Map.of("NODE_ENV", "production", "LOG_LEVEL", "info")),
new ArrayList<>(List.of("*.example.com"))
);
System.out.println("Initial config: " + config);
// Compose: ServerConfig → Map<String, Integer> → Integer (0-or-1)
Ixed<Map<String, Integer>, String, Integer> portIx = IxedInstances.mapIx();
Traversal<ServerConfig, Integer> httpPortTraversal =
portsLens.asTraversal().andThen(portIx.ix("http"));
// Safe access through composition
List<Integer> httpPorts = Traversals.getAll(httpPortTraversal, config);
System.out.println("HTTP port via traversal: " + httpPorts);
// Safe modification through composition
ServerConfig updatedConfig = Traversals.modify(httpPortTraversal, p -> p + 1000, config);
System.out.println("After incrementing HTTP port: " + updatedConfig.ports());
// Non-existent key = empty focus
Traversal<ServerConfig, Integer> ftpPortTraversal =
portsLens.asTraversal().andThen(portIx.ix("ftp"));
List<Integer> ftpPorts = Traversals.getAll(ftpPortTraversal, config);
System.out.println("FTP port (missing): " + ftpPorts);
ServerConfig stillSameConfig = Traversals.modify(ftpPortTraversal, p -> p + 1, config);
System.out.println("After 'modify' missing FTP: " + stillSameConfig.ports());
System.out.println("Config unchanged - Ixed didn't insert FTP");
System.out.println();
// 5. Checking existence
System.out.println("--- Existence Checking ---");
System.out.println("Contains 'http': " + IxedInstances.contains(portIx, "http", config.ports()));
System.out.println("Contains 'ftp': " + IxedInstances.contains(portIx, "ftp", config.ports()));
// Pattern: Check before deciding on operation
String keyToUpdate = "ws";
if (IxedInstances.contains(portIx, keyToUpdate, config.ports())) {
Map<String, Integer> newPorts = IxedInstances.update(portIx, keyToUpdate, 9999, config.ports());
System.out.println("WebSocket port updated to 9999: " + newPorts);
} else {
System.out.println(keyToUpdate + " not found - would need At to insert");
}
System.out.println();
// 6. Building Ixed from At
System.out.println("--- Creating Ixed from At ---");
At<Map<String, String>, String, String> stringMapAt = AtInstances.mapAt();
Ixed<Map<String, String>, String, String> envIx = IxedInstances.fromAt(stringMapAt);
Map<String, String> env = config.environment();
System.out.println("Initial environment: " + env);
// Use derived Ixed for safe operations
Map<String, String> updatedEnv = IxedInstances.update(envIx, "LOG_LEVEL", "debug", env);
System.out.println("After update LOG_LEVEL: " + updatedEnv);
Map<String, String> unchangedEnv = IxedInstances.update(envIx, "NEW_VAR", "value", env);
System.out.println("After 'update' non-existent NEW_VAR: " + unchangedEnv);
System.out.println("NEW_VAR not added - Ixed from At still can't insert");
System.out.println("\n=== All operations maintain immutability and structure ===");
}
}
Expected Output:
=== Ixed Type Class Usage Examples ===
--- Map Safe Access (No Insertion) ---
Initial ports: {http=8080, https=8443}
HTTP port: Optional[8080]
FTP port (missing): Optional.empty
After update 'http': {http=9000, https=8443}
After 'update' non-existent 'ftp': {http=8080, https=8443}
FTP was NOT added (Ixed doesn't insert)
After doubling 'https': {http=8080, https=16886}
Original unchanged: {http=8080, https=8443}
--- Ixed vs At: Insertion Behaviour ---
At.insertOrUpdate on empty map: {newKey=42}
Ixed.update on empty map: {}
Ixed preserves structure - no insertion occurred
--- List Safe Indexed Access ---
Initial origins: [localhost, example.com, api.example.com]
Index 1: Optional[example.com]
Index 10 (out of bounds): Optional.empty
Index -1 (negative): Optional.empty
After update index 1: [localhost, www.example.com, api.example.com]
After 'update' out-of-bounds index 10: [localhost, example.com, api.example.com]
No exception thrown, list unchanged
After uppercase index 0: [LOCALHOST, example.com, api.example.com]
Original unchanged: [localhost, example.com, api.example.com]
--- Deep Composition: Lens + Ixed ---
Initial config: ServerConfig[name=production, ports={http=8080, https=8443, ws=8765}, environment={NODE_ENV=production, LOG_LEVEL=info}, allowedOrigins=[*.example.com]]
HTTP port via traversal: [8080]
After incrementing HTTP port: {http=9080, https=8443, ws=8765}
FTP port (missing): []
After 'modify' missing FTP: {http=8080, https=8443, ws=8765}
Config unchanged - Ixed didn't insert FTP
--- Existence Checking ---
Contains 'http': true
Contains 'ftp': false
WebSocket port updated to 9999: {http=8080, https=8443, ws=9999}
--- Creating Ixed from At ---
Initial environment: {NODE_ENV=production, LOG_LEVEL=info}
After update LOG_LEVEL: {NODE_ENV=production, LOG_LEVEL=debug}
After 'update' non-existent NEW_VAR: {NODE_ENV=production, LOG_LEVEL=info}
NEW_VAR not added - Ixed from At still can't insert
=== All operations maintain immutability and structure ===
Further Reading
- Haskell Lens Library - Ixed Type Class
- Optics By Example - Comprehensive optics guide
- At Type Class Guide - Full CRUD operations with insert/delete
- Traversals Guide - Bulk operations on collections
- Prisms Guide - Understanding the some() Prism used internally
Summary
The Ixed type class provides a powerful abstraction for safe, partial access to indexed structures:
- Traversal to existing elements: ix(index) returns Traversal<S, A> focusing on 0-or-1 elements
- No insertion or deletion: Operations become no-ops for missing indices
- Structure preservation: The shape of your data never changes unexpectedly
- Built on At: Inherits precise semantics whilst removing CRUD mutations
- Composable: Chains naturally with other optics for safe deep access
- Exception-free: Out-of-bounds access returns empty, doesn't throw
Ixed complements At by providing read/update-only semantics when you need safe partial access without the risk of accidentally modifying your data structure's shape. Use Ixed when correctness and structure preservation matter more than the ability to insert or delete entries.
Previous: At Type Class | Next: Profunctor Optics
Profunctor Optics: Advanced Data Transformation
Adapting Optics to Different Data Types
- How to adapt existing optics to work with different data types
- Using contramap to change source types and map to change target types
- Combining both adaptations with dimap for complete format conversion
- Creating reusable adapter patterns for API integration
- Working with type-safe wrapper classes and legacy system integration
- When to use profunctor adaptations vs creating new optics from scratch
In the previous optics guides, we explored how to work with data structures directly using Lens, Prism, Iso, and Traversal. But what happens when you need to use an optic designed for one data type with a completely different data structure? What if you want to adapt an existing optic to work with new input or output formats?
This is where the profunctor nature of optics becomes invaluable. Every optic in higher-kinded-j is fundamentally a profunctor, which means it can be adapted to work with different source and target types using powerful transformation operations.
The Challenge: Type Mismatch in Real Systems
In real-world applications, you frequently encounter situations where:
- Legacy Integration: You have optics designed for old data structures but need to work with new ones
- API Adaptation: External APIs use different field names or data formats than your internal models
- Type Safety: You want to work with strongly-typed wrapper classes but reuse optics designed for raw values
- Data Migration: You're transitioning between data formats and need optics that work with both
Consider this scenario: you have a well-tested Lens that operates on a Person record, but you need to use it with an Employee record that contains a Person as a nested field. Rather than rewriting the lens, you can adapt it.
Think of Profunctor Adaptations Like...
- Universal adapters: Like electrical plug adapters that make devices work in different countries
- Translation layers: Converting between different "languages" of data representation
- Lens filters: Modifying what the optic sees (input) and what it produces (output)
- Pipeline adapters: Connecting optics that weren't originally designed to work together
The Three Profunctor Operations
Every optic provides three powerful adaptation methods that mirror the core profunctor operations:
1. contramap: Adapting the Source Type
The contramap operation allows you to adapt an optic to work with a different source type by providing a function that converts from the new source to the original source.
Use Case: You have a Lens<Person, String> for getting a person's first name, but you want to use it with Employee objects.
// Original lens: Person -> String (first name)
Lens<Person, String> firstNameLens = PersonLenses.firstName();
// Adapt it to work with Employee by providing the conversion
Lens<Employee, String> employeeFirstNameLens =
firstNameLens.contramap(employee -> employee.personalInfo());
// Now you can use the adapted lens directly on Employee objects
Employee employee = new Employee(123, new Person("Alice", "Johnson", ...), "Engineering");
String firstName = employeeFirstNameLens.get(employee); // "Alice"
2. map: Adapting the Target Type
The map operation adapts an optic to work with a different target type by providing a function that converts from the original target to the new target.
Use Case: You have a Lens<Person, LocalDate> for birth dates, but you want to work with formatted strings instead.
// Original lens: Person -> LocalDate
Lens<Person, LocalDate> birthDateLens = PersonLenses.birthDate();
// Adapt it to work with formatted strings
Lens<Person, String> birthDateStringLens =
birthDateLens.map(date -> date.format(DateTimeFormatter.ISO_LOCAL_DATE));
// The adapted lens now returns strings
Person person = new Person("Bob", "Smith", LocalDate.of(1985, 12, 25), ...);
String dateString = birthDateStringLens.get(person); // "1985-12-25"
3. dimap: Adapting Both Source and Target Types
The dimap operation is the most powerful—it adapts both the source and target types simultaneously. This is perfect for converting between completely different data representations.
Use Case: You have optics designed for internal Person objects but need to work with external PersonDto objects that use different field structures.
// Original traversal: Person -> String (hobbies)
Traversal<Person, String> hobbiesTraversal = PersonTraversals.hobbies();
// Adapt it to work with PersonDto (different source) and call them "interests" (different context)
Traversal<PersonDto, String> interestsTraversal =
hobbiesTraversal.dimap(
// Convert PersonDto to Person
dto -> new Person(
dto.fullName().split(" ")[0],
dto.fullName().split(" ")[1],
LocalDate.parse(dto.birthDateString()),
dto.interests()
),
// Convert Person back to PersonDto
person -> new PersonDto(
person.firstName() + " " + person.lastName(),
person.birthDate().format(DateTimeFormatter.ISO_LOCAL_DATE),
person.hobbies()
)
);
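To see why these three operations fit together, here is a toy getter-only "lens" in plain Java (an illustrative sketch, deliberately much simpler than the library's Lens): contramap pre-composes on the source side, map post-composes on the target side, and dimap does both at once.

```java
import java.util.function.Function;

public class ProfunctorSketch {
    // A read-only "lens": just a getter S -> A, enough to show the three ops.
    record Getter<S, A>(Function<S, A> get) {
        <T> Getter<T, A> contramap(Function<T, S> f) {   // adapt the source
            return new Getter<>(f.andThen(get));
        }
        <B> Getter<S, B> map(Function<A, B> g) {          // adapt the target
            return new Getter<>(get.andThen(g));
        }
        <T, B> Getter<T, B> dimap(Function<T, S> f, Function<A, B> g) {
            return new Getter<>(f.andThen(get).andThen(g)); // both at once
        }
    }

    record Person(String firstName) {}
    record Employee(Person personalInfo) {}

    public static void main(String[] args) {
        Getter<Person, String> firstName = new Getter<>(Person::firstName);

        // contramap: reuse the Person getter directly on Employee
        Getter<Employee, String> empName = firstName.contramap(Employee::personalInfo);
        System.out.println(empName.get().apply(new Employee(new Person("Alice")))); // Alice

        // dimap: Employee in, upper-cased name out
        Getter<Employee, String> shouting =
            firstName.dimap(Employee::personalInfo, String::toUpperCase);
        System.out.println(shouting.get().apply(new Employee(new Person("Bob")))); // BOB
    }
}
```

A full lens also threads the write direction through these adaptations, but the getter side is enough to show that each operation is ordinary function composition.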
Decision Guide: When to Use Each Operation
Use contramap When:
- Different source type, same target - Existing optic works perfectly, just need different input
- Extracting nested data - Your new type contains the old type as a field
- Wrapper type handling - Working with strongly-typed wrappers around base types
// Perfect for extracting nested data
Lens<Invoice, String> customerNameLens =
OrderLenses.customer().contramap(invoice -> invoice.order());
Use map When:
- Same source, different target format - You want to transform the output
- Data presentation - Converting raw data to display formats
- Type strengthening - Wrapping raw values in type-safe containers
// Perfect for presentation formatting
Lens<Product, String> formattedPriceLens =
ProductLenses.price().map(price -> "£" + price.setScale(2, RoundingMode.HALF_UP));
Use dimap When:
- Complete format conversion - Both input and output need transformation
- API integration - External systems use completely different data structures
- Legacy system support - Bridging between old and new data formats
- Data migration - Supporting multiple data representations simultaneously
// Perfect for API integration
Traversal<ApiUserDto, String> apiRolesTraversal =
UserTraversals.roles().dimap(
dto -> convertApiDtoToUser(dto),
userLogin -> convertUserToApiDto(userLogin)
);
Common Pitfalls
❌ Don't Do This:
// Creating adapters inline repeatedly
var lens1 = PersonLenses.firstName().contramap(emp -> emp.person());
var lens2 = PersonLenses.firstName().contramap(emp -> emp.person());
var lens3 = PersonLenses.firstName().contramap(emp -> emp.person());
// Over-adapting simple cases
Lens<Person, String> nameUpper = PersonLenses.firstName()
.map(String::toUpperCase)
.map(s -> s.trim())
.map(s -> s.replace(" ", "_")); // Just write one function!
// Forgetting null safety in conversions
Lens<EmployeeDto, String> unsafeLens = PersonLenses.firstName()
.contramap(dto -> dto.person()); // What if dto.person() is null?
// Complex conversions without error handling
Traversal<String, LocalDate> fragileParser =
Iso.of(LocalDate::toString, LocalDate::parse).asTraversal()
.contramap(complexString -> extractDatePart(complexString)); // Might throw!
✅ Do This Instead:
// Create adapters once, reuse everywhere
public static final Lens<Employee, String> EMPLOYEE_FIRST_NAME =
PersonLenses.firstName().contramap(Employee::personalInfo);
// Combine transformations efficiently
Function<String, String> normalise = name ->
name.toUpperCase().trim().replace(" ", "_");
Lens<Person, String> normalisedNameLens = PersonLenses.firstName().map(normalise);
// Handle null safety explicitly (fall back to an empty Person when the DTO field is null)
Lens<EmployeeDto, Optional<String>> safeNameLens = PersonLenses.firstName()
.map(Optional::of)
.contramap((EmployeeDto dto) ->
dto.person() != null ? dto.person() : new Person("", "", LocalDate.now(), List.of()));
// Use safe conversions with proper error handling
Function<String, Either<String, LocalDate>> safeParse = str -> {
try {
return Either.right(LocalDate.parse(extractDatePart(str)));
} catch (Exception e) {
return Either.left("Invalid date: " + str);
}
};
Performance Notes
Profunctor adaptations are designed for efficiency:
- Automatic fusion: Multiple `contramap` or `map` operations are automatically combined
- Lazy evaluation: Conversions only happen when the optic is actually used
- No boxing overhead: Simple transformations are inlined by the JVM
- Reusable adapters: Create once, use many times without additional overhead
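The fusion point can be pictured with plain `Function` composition: chaining several transformations composes the underlying functions, so pre-combining them (as the "Do This Instead" advice recommends) is behaviourally identical. A minimal, self-contained sketch of that idea — not the library's internals:

```java
import java.util.function.Function;

public class FusionSketch {
    public static void main(String[] args) {
        // Three chained transformations...
        Function<String, String> upper = String::toUpperCase;
        Function<String, String> trim  = String::trim;
        Function<String, String> snake = s -> s.replace(" ", "_");

        // ...fuse into one function via composition, exactly as chained
        // map calls on an optic compose their transformation functions.
        Function<String, String> fused = upper.andThen(trim).andThen(snake);

        System.out.println(fused.apply("  ada lovelace  ")); // ADA_LOVELACE
    }
}
```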
Best Practice: Create adapted optics as constants and reuse them:
public class OpticAdapters {
// Create once, use everywhere
public static final Lens<Employee, String> FIRST_NAME =
PersonLenses.firstName().contramap(Employee::personalInfo);
public static final Lens<Employee, String> FORMATTED_BIRTH_DATE =
PersonLenses.birthDate()
.contramap(Employee::personalInfo)
.map(date -> date.format(DateTimeFormatter.ofPattern("dd/MM/yyyy")));
public static final Traversal<CompanyDto, String> EMPLOYEE_EMAILS =
CompanyTraversals.employees()
.contramap((CompanyDto dto) -> convertDtoToCompany(dto))
.andThen(EmployeeTraversals.contacts())
.andThen(ContactLenses.email().asTraversal());
}
Real-World Example: API Integration
Let's explore a comprehensive example where you need to integrate with an external API that uses different field names and data structures than your internal models.
The Scenario: Your internal system uses Employee records, but the external API expects EmployeeDto objects with different field names:
// Internal model
@GenerateLenses
@GenerateTraversals
public record Employee(int id, Person personalInfo, String department) {}
@GenerateLenses
@GenerateTraversals
public record Person(String firstName, String lastName, LocalDate birthDate, List<String> skills) {}
// External API model
@GenerateLenses
public record EmployeeDto(int employeeId, PersonDto person, String dept) {}
@GenerateLenses
public record PersonDto(String fullName, String birthDateString, List<String> expertise) {}
The Solution: Create an adapter that converts between these formats while reusing your existing optics:
public class ApiIntegration {
// Conversion utilities
private static Employee dtoToEmployee(EmployeeDto dto) {
PersonDto personDto = dto.person();
String[] nameParts = personDto.fullName().split(" ", 2);
Person person = new Person(
nameParts[0],
nameParts.length > 1 ? nameParts[1] : "",
LocalDate.parse(personDto.birthDateString()),
personDto.expertise()
);
return new Employee(dto.employeeId(), person, dto.dept());
}
private static EmployeeDto employeeToDto(Employee employee) {
Person person = employee.personalInfo();
PersonDto personDto = new PersonDto(
person.firstName() + " " + person.lastName(),
person.birthDate().toString(),
person.skills()
);
return new EmployeeDto(employee.id(), personDto, employee.department());
}
// Adapted optics for API integration
public static final Lens<EmployeeDto, String> API_EMPLOYEE_DEPARTMENT =
EmployeeLenses.department().dimap(
ApiIntegration::dtoToEmployee,
ApiIntegration::employeeToDto
);
public static final Lens<EmployeeDto, String> API_EMPLOYEE_FIRST_NAME =
EmployeeLenses.personalInfo()
.andThen(PersonLenses.firstName())
.dimap(
ApiIntegration::dtoToEmployee,
ApiIntegration::employeeToDto
);
public static final Traversal<EmployeeDto, String> API_EMPLOYEE_SKILLS =
EmployeeTraversals.personalInfo()
.andThen(PersonTraversals.skills())
.dimap(
ApiIntegration::dtoToEmployee,
ApiIntegration::employeeToDto
);
// Use the adapters seamlessly with external data
public void processApiData(EmployeeDto externalEmployee) {
// Update department using existing business logic
EmployeeDto promoted = API_EMPLOYEE_DEPARTMENT.modify(
dept -> "Senior " + dept,
externalEmployee
);
// Normalise skills using existing traversal logic
EmployeeDto normalisedSkills = Traversals.modify(
API_EMPLOYEE_SKILLS,
skill -> skill.toLowerCase().trim(),
externalEmployee
);
sendToApi(promoted);
sendToApi(normalisedSkills);
}
}
Working with Type-Safe Wrappers
Another powerful use case is adapting optics to work with strongly-typed wrapper classes while maintaining type safety.
The Challenge: You want to use string manipulation functions on wrapper types:
// Strongly-typed wrappers
public record UserId(String value) {}
public record UserName(String value) {}
public record Email(String value) {}
@GenerateLenses
public record User(UserId id, UserName name, Email email, LocalDate createdAt) {}
The Solution: Create adapted lenses that unwrap and rewrap values:
public class WrapperAdapters {
// Generic wrapper lens creator
public static <W> Lens<W, String> stringWrapperLens(
Function<W, String> unwrap,
Function<String, W> wrap
) {
return Lens.of(unwrap, (wrapper, newValue) -> wrap.apply(newValue));
}
// Specific wrapper lenses
public static final Lens<UserId, String> USER_ID_STRING =
stringWrapperLens(UserId::value, UserId::new);
public static final Lens<UserName, String> USER_NAME_STRING =
stringWrapperLens(UserName::value, UserName::new);
public static final Lens<Email, String> EMAIL_STRING =
stringWrapperLens(Email::value, Email::new);
// Composed lenses for User operations
public static final Lens<User, String> USER_NAME_VALUE =
UserLenses.name().andThen(USER_NAME_STRING);
public static final Lens<User, String> USER_EMAIL_VALUE =
UserLenses.email().andThen(EMAIL_STRING);
// Usage examples
public User normaliseUser(User userLogin) {
return USER_NAME_VALUE.modify(name ->
Arrays.stream(name.toLowerCase().split(" "))
.map(word -> Character.toUpperCase(word.charAt(0)) + word.substring(1))
.collect(joining(" ")),
userLogin
);
}
public User updateEmailDomain(User userLogin, String newDomain) {
return USER_EMAIL_VALUE.modify(email -> {
String localPart = email.substring(0, email.indexOf('@'));
return localPart + "@" + newDomain;
}, userLogin);
}
}
Migration Patterns
Profunctor adaptations are particularly valuable during system migrations:
Legacy System Integration
// You have optics for PersonV1, but data is now PersonV2
public record PersonV1(String name, int age) {}
@GenerateLenses
public record PersonV2(String firstName, String lastName, LocalDate birthDate) {}
public class MigrationAdapters {
// Convert between versions
private static PersonV1 v2ToV1(PersonV2 v2) {
return new PersonV1(
v2.firstName() + " " + v2.lastName(),
Period.between(v2.birthDate(), LocalDate.now()).getYears()
);
}
private static PersonV2 v1ToV2(PersonV1 v1) {
String[] nameParts = v1.name().split(" ", 2);
return new PersonV2(
nameParts[0],
nameParts.length > 1 ? nameParts[1] : "",
LocalDate.now().minusYears(v1.age())
);
}
// Existing V1 optics work with V2 data
public static final Lens<PersonV2, String> V2_NAME_FROM_V1_LENS =
// Assume we have a V1 name lens
Lens.of(PersonV1::name, (p1, name) -> new PersonV1(name, p1.age()))
.dimap(MigrationAdapters::v2ToV1, MigrationAdapters::v1ToV2);
}
Database Schema Evolution
// Old database entity
public record CustomerEntityV1(Long id, String name, String email) {}
// New database entity
@GenerateLenses
public record CustomerEntityV2(Long id, String firstName, String lastName, String emailAddress, boolean active) {}
public class SchemaAdapters {
// Adapter for name field
public static final Lens<CustomerEntityV2, String> FULL_NAME_ADAPTER =
Lens.of(CustomerEntityV1::name, (v1, name) -> new CustomerEntityV1(v1.id(), name, v1.email()))
.dimap(
// V2 -> V1 conversion
v2 -> new CustomerEntityV1(v2.id(), v2.firstName() + " " + v2.lastName(), v2.emailAddress()),
// V1 -> V2 conversion
v1 -> {
String[] parts = v1.name().split(" ", 2);
return new CustomerEntityV2(
v1.id(),
parts[0],
parts.length > 1 ? parts[1] : "",
v1.email(),
true // Default active status
);
}
);
}
Complete, Runnable Example
This comprehensive example demonstrates all three profunctor operations in a realistic scenario:
package org.higherkindedj.example.optics.profunctor;
import org.higherkindedj.optics.Lens;
import org.higherkindedj.optics.Traversal;
import org.higherkindedj.optics.annotations.GenerateLenses;
import org.higherkindedj.optics.annotations.GenerateTraversals;
import org.higherkindedj.optics.util.Traversals;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.List;
import java.util.Optional;
public class OpticProfunctorExample {
// Internal data model
@GenerateLenses
@GenerateTraversals
public record Person(String firstName, String lastName, LocalDate birthDate, List<String> hobbies) {}
@GenerateLenses
public record Employee(int id, Person personalInfo, String department) {}
// External API model
@GenerateLenses
public record PersonDto(String fullName, String birthDateString, List<String> interests) {}
@GenerateLenses
public record EmployeeDto(int employeeId, PersonDto person, String dept) {}
// Type-safe wrapper
public record UserId(long value) {}
@GenerateLenses
public record UserProfile(UserId id, String displayName, boolean active) {}
public static void main(String[] args) {
System.out.println("=== PROFUNCTOR OPTICS EXAMPLE ===");
// Test data
var person = new Person("Alice", "Johnson",
LocalDate.of(1985, 6, 15),
List.of("reading", "cycling", "photography"));
var employee = new Employee(123, person, "Engineering");
// --- SCENARIO 1: contramap - Adapt source type ---
System.out.println("--- Scenario 1: contramap (Source Adaptation) ---");
// Original lens works on Person, adapt it for Employee
Lens<Person, String> firstNameLens = PersonLenses.firstName();
Lens<Employee, String> employeeFirstNameLens =
firstNameLens.contramap(Employee::personalInfo);
String name = employeeFirstNameLens.get(employee);
Employee renamedEmployee = employeeFirstNameLens.set("Alicia", employee);
System.out.println("Original employee: " + employee);
System.out.println("Extracted name: " + name);
System.out.println("Renamed employee: " + renamedEmployee);
System.out.println();
// --- SCENARIO 2: map - Adapt target type ---
System.out.println("--- Scenario 2: map (Target Adaptation) ---");
// Original lens returns LocalDate, adapt it to return formatted string
Lens<Person, LocalDate> birthDateLens = PersonLenses.birthDate();
Lens<Person, String> birthDateStringLens =
birthDateLens.map(date -> date.format(DateTimeFormatter.ISO_LOCAL_DATE));
String formattedDate = birthDateStringLens.get(person);
// Note: set operation would need to parse the string back to LocalDate
System.out.println("Person: " + person);
System.out.println("Formatted birth date: " + formattedDate);
System.out.println();
// --- SCENARIO 3: dimap - Adapt both source and target ---
System.out.println("--- Scenario 3: dimap (Both Source and Target Adaptation) ---");
// Convert between internal Person and external PersonDto
Traversal<Person, String> hobbiesTraversal = PersonTraversals.hobbies();
Traversal<PersonDto, String> interestsTraversal = hobbiesTraversal.dimap(
// PersonDto -> Person
dto -> {
String[] nameParts = dto.fullName().split(" ", 2);
return new Person(
nameParts[0],
nameParts.length > 1 ? nameParts[1] : "",
LocalDate.parse(dto.birthDateString()),
dto.interests()
);
},
// Person -> PersonDto
p -> new PersonDto(
p.firstName() + " " + p.lastName(),
p.birthDate().toString(),
p.hobbies()
)
);
var personDto = new PersonDto("Bob Smith", "1990-03-20",
List.of("gaming", "cooking", "travel"));
List<String> extractedInterests = Traversals.getAll(interestsTraversal, personDto);
PersonDto updatedDto = Traversals.modify(interestsTraversal,
interest -> interest.toUpperCase(), personDto);
System.out.println("Original DTO: " + personDto);
System.out.println("Extracted interests: " + extractedInterests);
System.out.println("Updated DTO: " + updatedDto);
System.out.println();
// --- SCENARIO 4: Working with wrapper types ---
System.out.println("--- Scenario 4: Wrapper Type Integration ---");
// Create a lens that works directly with the wrapped value
Lens<UserId, Long> userIdValueLens = Lens.of(UserId::value, (id, newValue) -> new UserId(newValue));
Lens<UserProfile, Long> profileIdValueLens =
UserProfileLenses.id().andThen(userIdValueLens);
var userProfile = new UserProfile(new UserId(456L), "Alice J.", true);
Long idValue = profileIdValueLens.get(userProfile);
UserProfile updatedProfile = profileIdValueLens.modify(id -> id + 1000, userProfile);
System.out.println("Original profile: " + userProfile);
System.out.println("Extracted ID value: " + idValue);
System.out.println("Updated profile: " + updatedProfile);
System.out.println();
// --- SCENARIO 5: Chaining adaptations ---
System.out.println("--- Scenario 5: Chaining Adaptations ---");
// Chain multiple adaptations: Employee -> Person -> String (formatted)
Lens<Employee, String> formattedEmployeeName =
PersonLenses.firstName()
.contramap(Employee::personalInfo) // Employee -> Person
.map(name -> "Mr/Ms. " + name.toUpperCase()); // String -> Formatted String
String formalName = formattedEmployeeName.get(employee);
Employee formalEmployee = formattedEmployeeName.set("Mr/Ms. ROBERT", employee);
System.out.println("Original employee: " + employee);
System.out.println("Formal name: " + formalName);
System.out.println("Employee with formal name: " + formalEmployee);
System.out.println();
// --- SCENARIO 6: Safe adaptations with Optional ---
System.out.println("--- Scenario 6: Safe Adaptations ---");
// Handle potentially null fields safely
Lens<Optional<Person>, Optional<String>> safeNameLens =
PersonLenses.firstName()
.map(Optional::of)
.contramap(optPerson -> optPerson.orElse(new Person("", "", LocalDate.now(), List.of())));
Optional<Person> maybePerson = Optional.of(person);
Optional<Person> emptyPerson = Optional.empty();
Optional<String> safeName1 = safeNameLens.get(maybePerson);
Optional<String> safeName2 = safeNameLens.get(emptyPerson);
System.out.println("Safe name from present person: " + safeName1);
System.out.println("Safe name from empty person: " + safeName2);
}
}
Expected Output:
=== PROFUNCTOR OPTICS EXAMPLE ===
--- Scenario 1: contramap (Source Adaptation) ---
Original employee: Employee[id=123, personalInfo=Person[firstName=Alice, lastName=Johnson, birthDate=1985-06-15, hobbies=[reading, cycling, photography]], department=Engineering]
Extracted name: Alice
Renamed employee: Employee[id=123, personalInfo=Person[firstName=Alicia, lastName=Johnson, birthDate=1985-06-15, hobbies=[reading, cycling, photography]], department=Engineering]
--- Scenario 2: map (Target Adaptation) ---
Person: Person[firstName=Alice, lastName=Johnson, birthDate=1985-06-15, hobbies=[reading, cycling, photography]]
Formatted birth date: 1985-06-15
--- Scenario 3: dimap (Both Source and Target Adaptation) ---
Original DTO: PersonDto[fullName=Bob Smith, birthDateString=1990-03-20, interests=[gaming, cooking, travel]]
Extracted interests: [gaming, cooking, travel]
Updated DTO: PersonDto[fullName=Bob Smith, birthDateString=1990-03-20, interests=[GAMING, COOKING, TRAVEL]]
--- Scenario 4: Wrapper Type Integration ---
Original profile: UserProfile[id=UserId[value=456], displayName=Alice J., active=true]
Extracted ID value: 456
Updated profile: UserProfile[id=UserId[value=1456], displayName=Alice J., active=true]
--- Scenario 5: Chaining Adaptations ---
Original employee: Employee[id=123, personalInfo=Person[firstName=Alice, lastName=Johnson, birthDate=1985-06-15, hobbies=[reading, cycling, photography]], department=Engineering]
Formal name: Mr/Ms. ALICE
Employee with formal name: Employee[id=123, personalInfo=Person[firstName=ROBERT, lastName=Johnson, birthDate=1985-06-15, hobbies=[reading, cycling, photography]], department=Engineering]
--- Scenario 6: Safe Adaptations ---
Safe name from present person: Optional[Alice]
Safe name from empty person: Optional[]
Integration with Existing Optics
Profunctor adaptations work seamlessly with all the optic types and features you've already learned:
With Effectful Updates
// Original effectful lens
Lens<Person, String> emailLens = PersonLenses.email();
// Adapt it for Employee and use with validation
Lens<Employee, String> employeeEmailLens = emailLens.contramap(Employee::personalInfo);
// Use with effectful validation as normal
Kind<ValidatedKind.Witness<String>, Employee> result =
employeeEmailLens.modifyF(this::validateEmail, employee, validatedApplicative);
With Deep Composition
// Compose adapted optics just like regular optics
Traversal<EmployeeDto, String> deepPath =
apiAdapter.asTraversal()
.andThen(PersonTraversals.hobbies())
.andThen(stringProcessor);
This profunctor capability makes higher-kinded-j optics incredibly flexible and reusable, allowing you to adapt existing, well-tested optics to work with new data formats and requirements without rewriting your core business logic.
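At bottom, all three adaptations are plain function composition. The following self-contained sketch — deliberately independent of the higher-kinded-j API, using a hypothetical read-only `Getter` interface — shows `contramap`, `map`, and `dimap` in miniature:

```java
import java.util.function.Function;

public class ProfunctorSketch {
    // A read-only optic is just a function; adapting it is composition.
    // (Hypothetical interface for illustration, not the library's Getter.)
    interface Getter<S, A> {
        A get(S s);
        // Adapt the source: run f before reading.
        default <T> Getter<T, A> contramap(Function<T, S> f) { return t -> get(f.apply(t)); }
        // Adapt the target: run f after reading.
        default <B> Getter<S, B> map(Function<A, B> f) { return s -> f.apply(get(s)); }
        // Adapt both sides at once.
        default <T, B> Getter<T, B> dimap(Function<T, S> pre, Function<A, B> post) {
            return t -> post.apply(get(pre.apply(t)));
        }
    }

    record Person(String firstName) {}
    record Employee(int id, Person person) {}

    public static void main(String[] args) {
        Getter<Person, String> name = Person::firstName;
        Getter<Employee, String> empName = name.contramap(Employee::person);
        Getter<Employee, String> formal = empName.map(String::toUpperCase);
        Getter<Employee, Integer> nameLen = name.dimap(Employee::person, String::length);

        Employee alice = new Employee(1, new Person("Alice"));
        System.out.println(empName.get(alice));  // Alice
        System.out.println(formal.get(alice));   // ALICE
        System.out.println(nameLen.get(alice));  // 5
    }
}
```

The library's `Lens` and `Traversal` adaptations add write support, but the read path follows this same shape.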
Capstone Example: Composing Optics for Deep Validation
- How to compose multiple optic types into powerful processing pipelines
- Building type-safe validation workflows with error accumulation
- Using `asTraversal()` to ensure safe optic composition
- Creating reusable validation paths with effectful operations
- Simplified validation with `modifyAllValidated`, `modifyAllEither`, and `modifyMaybe`
- Understanding when composition is superior to manual validation logic
- Advanced patterns for multi-level and conditional validation scenarios
In the previous guides, we explored each core optic—Lens, Prism, Iso and Traversal—as individual tools. We've seen how they provide focused, reusable, and composable access to immutable data.
Now, it's time to put it all together.
This guide showcases the true power of the optics approach by composing multiple different optics to solve a single, complex, real-world problem: performing deep, effectful validation on a nested data structure.
The Scenario: Validating User Permissions
Imagine a data model for a form that can be filled out by either a registered User or a Guest. Our goal is to validate that every Permission held by a User has a valid name.
This single task requires us to:
- Focus on the form's `principal` field (a job for a Lens).
- Safely "select" the `User` case, ignoring any `Guest`s (a job for a Prism).
- Operate on every `Permission` in the user's list (a job for a Traversal).
Think of This Composition Like...
- A telescope with multiple lenses: Each optic focuses deeper into the data structure
- A manufacturing pipeline: Each stage processes and refines the data further
- A filter chain: Data flows through multiple filters, each handling a specific concern
- A surgical procedure: Precise, layered operations that work together for a complex outcome
1. The Data Model
Here is the nested data structure, annotated to generate all the optics we will need.
import org.higherkindedj.optics.annotations.GenerateLenses;
import org.higherkindedj.optics.annotations.GeneratePrisms;
import org.higherkindedj.optics.annotations.GenerateTraversals;
import java.util.List;
@GenerateLenses
public record Permission(String name) {}
@GeneratePrisms
public sealed interface Principal {}
@GenerateLenses
@GenerateTraversals
public record User(String username, List<Permission> permissions) implements Principal {}
public record Guest() implements Principal {}
@GenerateLenses
public record Form(int formId, Principal principal) {}
2. The Validation Logic
Our validation function will take a permission name (String) and return a Validated<String, String>. The Validated applicative functor will automatically handle accumulating any errors found.
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.validated.Validated;
import org.higherkindedj.hkt.validated.ValidatedKind;
import static org.higherkindedj.hkt.validated.ValidatedKindHelper.VALIDATED;
import java.util.Set;
private static final Set<String> VALID_PERMISSIONS = Set.of("PERM_READ", "PERM_WRITE", "PERM_DELETE");
public static Kind<ValidatedKind.Witness<String>, String> validatePermissionName(String name) {
if (VALID_PERMISSIONS.contains(name)) {
return VALIDATED.widen(Validated.valid(name));
} else {
return VALIDATED.widen(Validated.invalid("Invalid permission: " + name));
}
}
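The error-accumulating behaviour of `Validated` with a string `Semigroup` can be modelled in miniature. This self-contained sketch (a plain-Java model, not the higher-kinded-j implementation) shows why two invalid permissions yield one combined error rather than a short-circuit:

```java
import java.util.List;
import java.util.Set;
import java.util.function.BinaryOperator;

public class AccumulationSketch {
    // Validated as a minimal sum type: either a value, or all errors joined.
    sealed interface Validated<E, A> permits Valid, Invalid {}
    record Valid<E, A>(A value) implements Validated<E, A> {}
    record Invalid<E, A>(E error) implements Validated<E, A> {}

    // Combine two results; errors are joined with a semigroup instead of
    // stopping at the first failure.
    static <E, A> Validated<E, List<A>> map2(
            Validated<E, A> va, Validated<E, A> vb, BinaryOperator<E> combine) {
        if (va instanceof Valid<E, A> a && vb instanceof Valid<E, A> b)
            return new Valid<>(List.of(a.value(), b.value()));
        if (va instanceof Invalid<E, A> ia && vb instanceof Invalid<E, A> ib)
            return new Invalid<>(combine.apply(ia.error(), ib.error()));
        if (va instanceof Invalid<E, A> ia) return new Invalid<>(ia.error());
        if (vb instanceof Invalid<E, A> ib) return new Invalid<>(ib.error());
        throw new IllegalStateException("unreachable");
    }

    static Validated<String, String> validate(String name) {
        return Set.of("PERM_READ", "PERM_WRITE", "PERM_DELETE").contains(name)
            ? new Valid<>(name)
            : new Invalid<>("Invalid permission: " + name);
    }

    public static void main(String[] args) {
        BinaryOperator<String> semigroup = (a, b) -> a + "; " + b;
        System.out.println(map2(validate("PERM_READ"), validate("PERM_WRITE"), semigroup));
        System.out.println(map2(validate("PERM_SUDO"), validate("PERM_EXECUTE"), semigroup));
    }
}
```

The library's `ValidatedMonad.instance(Semigroups.string("; "))` provides exactly this combining behaviour for the traversal's `modifyF`.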
3. Understanding the Composition Strategy
Before diving into the code, let's understand why we need each type of optic and how they work together:
Why a Lens for principal?
- The `principal` field always exists in a `Form`
- We need guaranteed access to focus on this field
- A `Lens` provides exactly this: reliable access to required data
Why a Prism for User?
- The `principal` could be either a `User` or a `Guest`
- We only want to validate `User` permissions, ignoring `Guest`s
- A `Prism` provides safe, optional access to specific sum type cases
Why a Traversal for permissions?
- We need to validate every permission in the list
- We want to accumulate all validation errors, not stop at the first one
- A `Traversal` provides bulk operations over collections
Why convert everything to Traversal?
- `Traversal` is the most general optic type
- It can represent zero-or-more targets (perfect for our "might be empty" scenario)
- All other optics can be converted to `Traversal` for seamless composition
4. Composing the Master Optic
Now for the main event. We will compose our generated optics to create a single Traversal that declaratively represents the path from a Form all the way down to each permission name. While the new with* helpers are great for simple, shallow updates, a deep and conditional update like this requires composition.
To ensure type-safety across different optic types, we convert each Lens and Prism in the chain to a Traversal using the .asTraversal() method.
import org.higherkindedj.optics.Lens;
import org.higherkindedj.optics.Prism;
import org.higherkindedj.optics.Traversal;
// Get the individual generated optics
Lens<Form, Principal> formPrincipalLens = FormLenses.principal();
Prism<Principal, User> principalUserPrism = PrincipalPrisms.user();
Traversal<User, Permission> userPermissionsTraversal = UserTraversals.permissions();
Lens<Permission, String> permissionNameLens = PermissionLenses.name();
// Compose them into a single, deep Traversal
Traversal<Form, String> formToPermissionNameTraversal =
formPrincipalLens.asTraversal()
.andThen(principalUserPrism.asTraversal())
.andThen(userPermissionsTraversal)
.andThen(permissionNameLens.asTraversal());
This single formToPermissionNameTraversal object now encapsulates the entire complex path.
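Conceptually, reading through this composed traversal flattens into a plain extraction pipeline. The following self-contained sketch models the semantics of a `getAll` over the composed path (it is an illustration of what the composition means, not the library's implementation):

```java
import java.util.List;
import java.util.stream.Stream;

public class ComposedPathSketch {
    sealed interface Principal permits User, Guest {}
    record Permission(String name) {}
    record User(String username, List<Permission> permissions) implements Principal {}
    record Guest() implements Principal {}
    record Form(int formId, Principal principal) {}

    // What reading through the composed traversal amounts to:
    // Lens step (total), Prism step (0-or-1), Traversal step (0-or-more), Lens step.
    static List<String> allPermissionNames(Form form) {
        Principal p = form.principal();                 // Lens: always present
        Stream<User> users = p instanceof User u        // Prism: matches or yields nothing
            ? Stream.of(u) : Stream.empty();
        return users
            .flatMap(u -> u.permissions().stream())     // Traversal: each element
            .map(Permission::name)                      // Lens: focus the name
            .toList();
    }

    public static void main(String[] args) {
        Form userForm = new Form(1, new User("alice",
            List.of(new Permission("PERM_READ"), new Permission("PERM_WRITE"))));
        System.out.println(allPermissionNames(userForm));
        System.out.println(allPermissionNames(new Form(2, new Guest()))); // Guest: no targets
    }
}
```

The real traversal additionally supports effectful modification via `modifyF`, rebuilding the `Form` around any updated names.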
When to Use Optic Composition vs Other Approaches
Use Optic Composition When:
- Complex nested validation - Multiple levels of data structure with conditional logic
- Reusable validation paths - The same validation logic applies to multiple scenarios
- Type-safe bulk operations - You need to ensure compile-time safety for collection operations
- Error accumulation - You want to collect all errors, not stop at the first failure
// Perfect for reusable, complex validation
Traversal<Company, String> allEmployeeEmails =
CompanyTraversals.departments()
.andThen(DepartmentTraversals.employees())
.andThen(EmployeePrisms.active().asTraversal()) // Only active employees
.andThen(EmployeeLenses.email().asTraversal());
// Use across multiple validation scenarios
Validated<List<String>, Company> result1 = validateEmails(company1);
Validated<List<String>, Company> result2 = validateEmails(company2);
Use Direct Validation When:
- Simple, flat structures - No deep nesting or conditional access needed
- One-off validation - Logic won't be reused elsewhere
- Performance critical - Minimal abstraction overhead required
// Simple validation doesn't need optics
public Validated<String, User> validateUser(User userLogin) {
if (userLogin.username().length() < 3) {
return Validated.invalid("Username too short");
}
return Validated.valid(userLogin);
}
Use Stream Processing When:
- Complex transformations - Multiple operations that don't map to optic patterns
- Aggregation logic - Computing statistics or summaries
- Filtering and collecting - Changing the structure of collections
// Better with streams for aggregation
Map<String, Long> permissionCounts = forms.stream()
.map(Form::principal)
.filter(User.class::isInstance)
.map(User.class::cast)
.flatMap(userLogin -> userLogin.permissions().stream())
.collect(groupingBy(Permission::name, counting()));
Common Pitfalls
❌ Don't Do This:
// Over-composing simple cases
Traversal<Form, Integer> formIdTraversal = FormLenses.formId().asTraversal();
// Just use: form.formId()
// Forgetting error accumulation setup
// This won't accumulate errors properly without the right Applicative
var badResult = traversal.modifyF(validatePermissionName, form, /* wrong applicative */);
// Creating complex compositions inline
var inlineResult = FormLenses.principal().asTraversal()
.andThen(PrincipalPrisms.user().asTraversal())
.andThen(UserTraversals.permissions())
.andThen(PermissionLenses.name().asTraversal())
.modifyF(validatePermissionName, form, applicative); // Hard to read and reuse
// Ignoring the path semantics
// This tries to validate ALL strings, not just permission names
Traversal<Form, String> badTraversal = /* any string traversal */;
✅ Do This Instead:
// Use direct access for simple cases
int formId = form.formId(); // Clear and direct
// Set up error accumulation properly
Applicative<ValidatedKind.Witness<String>> validatedApplicative =
ValidatedMonad.instance(Semigroups.string("; "));
// Create reusable, well-named compositions
public static final Traversal<Form, String> FORM_TO_PERMISSION_NAMES =
FormLenses.principal().asTraversal()
.andThen(PrincipalPrisms.user().asTraversal())
.andThen(UserTraversals.permissions())
.andThen(PermissionLenses.name().asTraversal());
// Use the well-named traversal
var result = FORM_TO_PERMISSION_NAMES.modifyF(validatePermissionName, form, validatedApplicative);
// Be specific about what you're validating
// This traversal has clear semantics: Form -> User permissions -> permission names
Performance Notes
Optic composition is designed for efficiency:
- Lazy evaluation: Only processes data when actually used
- Structural sharing: Unchanged parts of data structures are reused
- Single-pass processing: `modifyF` traverses the structure only once
- Memory efficient: Only creates new objects for changed data
- JIT compiler optimisation: Complex compositions are optimised by the JVM's just-in-time compiler through method inlining
Best Practice: Create composed optics as constants for reuse:
public class ValidationOptics {
// Reusable validation paths
public static final Traversal<Form, String> USER_PERMISSION_NAMES =
FormLenses.principal().asTraversal()
.andThen(PrincipalPrisms.user().asTraversal())
.andThen(UserTraversals.permissions())
.andThen(PermissionLenses.name().asTraversal());
public static final Traversal<Company, String> EMPLOYEE_EMAILS =
CompanyTraversals.employees()
.andThen(EmployeeLenses.contactInfo().asTraversal())
.andThen(ContactInfoLenses.email().asTraversal());
// Helper methods for common validations
public static Validated<List<String>, Form> validatePermissions(Form form) {
return VALIDATED.narrow(USER_PERMISSION_NAMES.modifyF(
ValidationOptics::validatePermissionName,
form,
getValidatedApplicative()
));
}
}
Advanced Composition Patterns
1. Multi-Level Validation
// Validate both user data AND permissions in one pass
public static Validated<List<String>, Form> validateFormCompletely(Form form) {
// First validate the user's basic info
var userValidation = FormLenses.principal().asTraversal()
.andThen(PrincipalPrisms.user().asTraversal())
.andThen(UserLenses.username().asTraversal())
.modifyF(ValidationOptics::validateUsername, form, getValidatedApplicative());
// Then validate permissions
var permissionValidation = FORM_TO_PERMISSION_NAMES
.modifyF(ValidationOptics::validatePermissionName, form, getValidatedApplicative());
// Combine both validations
return VALIDATED.narrow(getValidatedApplicative().map2(
userValidation,
permissionValidation,
(validForm1, validForm2) -> validForm2 // Return the final form
));
}
2. Conditional Validation Paths
// Different validation rules for different user types
public static final Traversal<Form, String> ADMIN_USER_PERMISSIONS =
FormLenses.principal().asTraversal()
.andThen(PrincipalPrisms.user().asTraversal())
.andThen(UserPrisms.adminUser().asTraversal()) // Only admin users
.andThen(AdminUserTraversals.permissions())
.andThen(PermissionLenses.name().asTraversal());
public static final Traversal<Form, String> REGULAR_USER_PERMISSIONS =
FormLenses.principal().asTraversal()
.andThen(PrincipalPrisms.user().asTraversal())
.andThen(UserPrisms.regularUser().asTraversal()) // Only regular users
.andThen(RegularUserTraversals.permissions())
.andThen(PermissionLenses.name().asTraversal());
3. Cross-Field Validation
// Validate that permissions are appropriate for the user's role
public static Validated<List<String>, Form> validatePermissionsForRole(Form form) {
return FormLenses.principal().asTraversal()
.andThen(PrincipalPrisms.user().asTraversal())
.modifyF(userLogin -> {
// Custom validation that checks both role and permissions
Set<String> allowedPerms = getAllowedPermissionsForRole(userLogin.role());
List<String> errors = userLogin.permissions().stream()
.map(Permission::name)
.filter(perm -> !allowedPerms.contains(perm))
.map(perm -> "Permission '" + perm + "' not allowed for role " + userLogin.role())
.toList();
return errors.isEmpty()
? VALIDATED.widen(Validated.valid(userLogin))
: VALIDATED.widen(Validated.invalid(String.join("; ", errors)));
}, form, getValidatedApplicative());
}
Complete, Runnable Example
With our composed Traversal, we can now use modifyF to run our validation logic. The Traversal handles the navigation and filtering, while the Validated applicative (created with a Semigroup for joining error strings) handles the effects and error accumulation.
package org.higherkindedj.example.optics;
import static org.higherkindedj.hkt.validated.ValidatedKindHelper.VALIDATED;
import java.util.List;
import java.util.Set;
import org.higherkindedj.hkt.Applicative;
import org.higherkindedj.hkt.Kind;
import org.higherkindedj.hkt.Semigroups;
import org.higherkindedj.hkt.validated.Validated;
import org.higherkindedj.hkt.validated.ValidatedKind;
import org.higherkindedj.hkt.validated.ValidatedMonad;
import org.higherkindedj.optics.Lens;
import org.higherkindedj.optics.Prism;
import org.higherkindedj.optics.Traversal;
import org.higherkindedj.optics.annotations.GenerateLenses;
import org.higherkindedj.optics.annotations.GeneratePrisms;
import org.higherkindedj.optics.annotations.GenerateTraversals;
public class ValidatedTraversalExample {
// --- Data Model ---
@GenerateLenses
public record Permission(String name) {}
@GeneratePrisms
public sealed interface Principal {}
@GenerateLenses
@GenerateTraversals
public record User(String username, List<Permission> permissions) implements Principal {}
public record Guest() implements Principal {}
@GenerateLenses
public record Form(int formId, Principal principal) {}
// --- Validation Logic ---
private static final Set<String> VALID_PERMISSIONS = Set.of("PERM_READ", "PERM_WRITE", "PERM_DELETE");
public static Kind<ValidatedKind.Witness<String>, String> validatePermissionName(String name) {
if (VALID_PERMISSIONS.contains(name)) {
return VALIDATED.widen(Validated.valid(name));
} else {
return VALIDATED.widen(Validated.invalid("Invalid permission: " + name));
}
}
// --- Reusable Optic Compositions ---
public static final Traversal<Form, String> FORM_TO_PERMISSION_NAMES =
FormLenses.principal().asTraversal()
.andThen(PrincipalPrisms.user().asTraversal())
.andThen(UserTraversals.permissions())
.andThen(PermissionLenses.name().asTraversal());
// --- Helper Methods ---
private static Applicative<ValidatedKind.Witness<String>> getValidatedApplicative() {
return ValidatedMonad.instance(Semigroups.string("; "));
}
public static Validated<String, Form> validateFormPermissions(Form form) {
Kind<ValidatedKind.Witness<String>, Form> result =
FORM_TO_PERMISSION_NAMES.modifyF(
ValidatedTraversalExample::validatePermissionName,
form,
getValidatedApplicative()
);
return VALIDATED.narrow(result);
}
public static void main(String[] args) {
System.out.println("=== OPTIC COMPOSITION VALIDATION EXAMPLE ===");
System.out.println();
// --- SCENARIO 1: Form with valid permissions ---
System.out.println("--- Scenario 1: Valid Permissions ---");
var validUser = new User("alice", List.of(
new Permission("PERM_READ"),
new Permission("PERM_WRITE")
));
var validForm = new Form(1, validUser);
System.out.println("Input: " + validForm);
Validated<String, Form> validResult = validateFormPermissions(validForm);
System.out.println("Result: " + validResult);
System.out.println();
// --- SCENARIO 2: Form with multiple invalid permissions ---
System.out.println("--- Scenario 2: Multiple Invalid Permissions ---");
var invalidUser = new User("charlie", List.of(
new Permission("PERM_EXECUTE"), // Invalid
new Permission("PERM_WRITE"), // Valid
new Permission("PERM_SUDO"), // Invalid
new Permission("PERM_READ") // Valid
));
var multipleInvalidForm = new Form(3, invalidUser);
System.out.println("Input: " + multipleInvalidForm);
Validated<String, Form> invalidResult = validateFormPermissions(multipleInvalidForm);
System.out.println("Result (errors accumulated): " + invalidResult);
System.out.println();
// --- SCENARIO 3: Form with Guest principal (no targets for traversal) ---
System.out.println("--- Scenario 3: Guest Principal (No Validation Targets) ---");
var guestForm = new Form(4, new Guest());
System.out.println("Input: " + guestForm);
Validated<String, Form> guestResult = validateFormPermissions(guestForm);
System.out.println("Result (path does not match): " + guestResult);
System.out.println();
// --- SCENARIO 4: Form with empty permissions list ---
System.out.println("--- Scenario 4: Empty Permissions List ---");
var emptyPermissionsUser = new User("diana", List.of());
var emptyPermissionsForm = new Form(5, emptyPermissionsUser);
System.out.println("Input: " + emptyPermissionsForm);
Validated<String, Form> emptyResult = validateFormPermissions(emptyPermissionsForm);
System.out.println("Result (empty list): " + emptyResult);
System.out.println();
// --- SCENARIO 5: Demonstrating optic reusability ---
System.out.println("--- Scenario 5: Optic Reusability ---");
List<Form> formsToValidate = List.of(validForm, multipleInvalidForm, guestForm);
System.out.println("Batch validation results:");
formsToValidate.forEach(form -> {
Validated<String, Form> result = validateFormPermissions(form);
String status = result.isValid() ? "✓ VALID" : "✗ INVALID";
System.out.println(" Form " + form.formId() + ": " + status);
if (result.isInvalid()) {
System.out.println(" Errors: " + result.getError());
}
});
System.out.println();
// --- SCENARIO 6: Alternative validation with different error accumulation ---
System.out.println("--- Scenario 6: Different Error Accumulation Strategy ---");
// Use list-based error accumulation instead of string concatenation
Applicative<ValidatedKind.Witness<List<String>>> listApplicative =
ValidatedMonad.instance(Semigroups.list());
// A validation function whose error type is List<String>, matching the list Semigroup
java.util.function.Function<String, Kind<ValidatedKind.Witness<List<String>>, String>> listValidation =
name -> VALID_PERMISSIONS.contains(name)
? VALIDATED.widen(Validated.valid(name))
: VALIDATED.widen(Validated.invalid(List.of("Invalid permission: " + name)));
Kind<ValidatedKind.Witness<List<String>>, Form> listResult =
FORM_TO_PERMISSION_NAMES.modifyF(listValidation, multipleInvalidForm, listApplicative);
System.out.println("Input: " + multipleInvalidForm);
System.out.println("Result with list accumulation: " + VALIDATED.narrow(listResult));
}
}
Expected Output:
=== OPTIC COMPOSITION VALIDATION EXAMPLE ===
--- Scenario 1: Valid Permissions ---
Input: Form[formId=1, principal=User[username=alice, permissions=[Permission[name=PERM_READ], Permission[name=PERM_WRITE]]]]
Result: Valid(Form[formId=1, principal=User[username=alice, permissions=[Permission[name=PERM_READ], Permission[name=PERM_WRITE]]]])
--- Scenario 2: Multiple Invalid Permissions ---
Input: Form[formId=3, principal=User[username=charlie, permissions=[Permission[name=PERM_EXECUTE], Permission[name=PERM_WRITE], Permission[name=PERM_SUDO], Permission[name=PERM_READ]]]]
Result (errors accumulated): Invalid(Invalid permission: PERM_EXECUTE; Invalid permission: PERM_SUDO)
--- Scenario 3: Guest Principal (No Validation Targets) ---
Input: Form[formId=4, principal=Guest[]]
Result (path does not match): Valid(Form[formId=4, principal=Guest[]])
--- Scenario 4: Empty Permissions List ---
Input: Form[formId=5, principal=User[username=diana, permissions=[]]]
Result (empty list): Valid(Form[formId=5, principal=User[username=diana, permissions=[]]])
--- Scenario 5: Optic Reusability ---
Batch validation results:
Form 1: ✓ VALID
Form 3: ✗ INVALID
Errors: Invalid permission: PERM_EXECUTE; Invalid permission: PERM_SUDO
Form 4: ✓ VALID
--- Scenario 6: Different Error Accumulation Strategy ---
Input: Form[formId=3, principal=User[username=charlie, permissions=[Permission[name=PERM_EXECUTE], Permission[name=PERM_WRITE], Permission[name=PERM_SUDO], Permission[name=PERM_READ]]]]
Result with list accumulation: Invalid([Invalid permission: PERM_EXECUTE, Invalid permission: PERM_SUDO])
This shows how our single, composed optic correctly handled all cases: it accumulated multiple failures into a single Invalid result, and it correctly did nothing (resulting in a Valid state) when the path did not match. This is the power of composing simple, reusable optics to solve complex problems in a safe, declarative, and boilerplate-free way.
Why This Approach is Powerful
This capstone example demonstrates several key advantages of the optics approach:
Declarative Composition
The FORM_TO_PERMISSION_NAMES traversal reads like a clear path specification: "From a Form, go to the principal; if it's a User, go to each permission, then to its name." This is self-documenting code.
Type Safety
Every step in the composition is checked at compile time. It's impossible to accidentally apply permission validation to Guest data or to skip the User filtering step.
Automatic Error Accumulation
The Validated applicative automatically collects all validation errors without us having to write any error-handling boilerplate. We get comprehensive validation reports for free.
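Under the hood, the accumulation step is driven by the `Semigroup` you supply. This library-free sketch (plain Java, not the higher-kinded-j API) shows the rule that a helper like `Semigroups.string("; ")` provides: when two failures meet, their errors are merged rather than one being discarded.

```java
import java.util.function.BinaryOperator;

public class AccumulationSketch {
  // Stand-in for a Semigroup<String>: a rule for combining two errors.
  static final BinaryOperator<String> JOIN = (a, b) -> a + "; " + b;

  // When both sides of an applicative combination are Invalid,
  // the applicative merges the errors with the Semigroup.
  static String combineFailures(String first, String second) {
    return JOIN.apply(first, second);
  }

  public static void main(String[] args) {
    System.out.println(combineFailures(
        "Invalid permission: PERM_EXECUTE",
        "Invalid permission: PERM_SUDO"));
    // Prints: Invalid permission: PERM_EXECUTE; Invalid permission: PERM_SUDO
  }
}
```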
Reusability
The same composed optic can be used for validation, data extraction, transformation, or any other operation. We write the path once and reuse it everywhere.
Composability
Each individual optic (Lens, Prism, Traversal) can be tested and reasoned about independently, then composed to create more complex behaviour.
Graceful Handling of Edge Cases
The composition automatically handles empty collections, missing data, and type mismatches without special case code.
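The empty-collection case follows directly from how a traversal folds its targets: with zero targets, the validation function is never invoked, so the result is trivially valid. A plain-Java sketch of that logic (names are illustrative, not the library API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

public class EmptyTargetsSketch {
  // Validate every target; with an empty list the loop body never runs,
  // so no errors can possibly be produced.
  static List<String> validateAll(List<String> targets,
                                  Function<String, Optional<String>> check) {
    List<String> errors = new ArrayList<>();
    for (String t : targets) {
      check.apply(t).ifPresent(errors::add);
    }
    return errors; // empty list of targets -> empty list of errors
  }

  public static void main(String[] args) {
    var errors = validateAll(List.of(), t -> Optional.of("never reached"));
    System.out.println(errors.isEmpty()); // true: vacuously valid
  }
}
```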
By mastering optic composition, you gain a powerful tool for building robust, maintainable data processing pipelines that are both expressive and efficient.
Modern Simplification: Validation-Aware Methods
Higher-kinded-j provides specialised validation methods that simplify the patterns shown above. These methods eliminate the need for explicit Applicative setup whilst maintaining full type safety and error accumulation capabilities.
The Traditional Approach (Revisited)
In the examples above, we used the general modifyF method with explicit Applicative configuration:
// Traditional approach: requires explicit Applicative setup
Applicative<ValidatedKind.Witness<String>> applicative =
ValidatedMonad.instance(Semigroups.string("; "));
Kind<ValidatedKind.Witness<String>, Form> result =
FORM_TO_PERMISSION_NAMES.modifyF(
ValidatedTraversalExample::validatePermissionName,
form,
applicative
);
Validated<String, Form> validated = VALIDATED.narrow(result);
Whilst powerful and flexible, this approach requires:
- Understanding of `Applicative` functors
- Manual creation of the `Applicative` instance
- Explicit narrowing of `Kind` results
- Knowledge of `Witness` types and HKT encoding
The Simplified Approach: Validation-Aware Methods
The new validation-aware methods provide a more direct API for common validation patterns:
1. Error Accumulation with modifyAllValidated
Simplifies the most common case: validating multiple fields and accumulating all errors.
import static org.higherkindedj.optics.fluent.OpticOps.modifyAllValidated;
// Simplified: direct Validated result, automatic error accumulation
Validated<List<String>, Form> result = modifyAllValidated(
FORM_TO_PERMISSION_NAMES,
name -> VALID_PERMISSIONS.contains(name)
? Validated.valid(name)
: Validated.invalid("Invalid permission: " + name),
form
);
Benefits:
- No `Applicative` setup required
- Direct `Validated` result (no `Kind` wrapping)
- Automatic error accumulation with `List<E>`
- Clear intent: "validate all and collect errors"
2. Short-Circuit Validation with modifyAllEither
For performance-critical validation that stops at the first error:
import static org.higherkindedj.optics.fluent.OpticOps.modifyAllEither;
// Short-circuit: stops at first error
Either<String, Form> result = modifyAllEither(
FORM_TO_PERMISSION_NAMES,
name -> VALID_PERMISSIONS.contains(name)
? Either.right(name)
: Either.left("Invalid permission: " + name),
form
);
Benefits:
- Stops processing on first error (performance optimisation)
- Direct `Either` result
- Perfect for fail-fast validation
- No unnecessary computation after failure
Comparison: Traditional vs Validation-Aware Methods
| Aspect | Traditional modifyF | Validation-Aware Methods |
|---|---|---|
| Applicative Setup | ✅ Required (explicit) | ❌ Not required (automatic) |
| Type Complexity | ⚠️ High (Kind, Witness) | ✅ Low (direct types) |
| Error Accumulation | ✅ Yes (via Applicative) | ✅ Yes (modifyAllValidated) |
| Short-Circuiting | ⚠️ Manual (via Either Applicative) | ✅ Built-in (modifyAllEither) |
| Learning Curve | ⚠️ Steep (HKT knowledge) | ✅ Gentle (familiar types) |
| Flexibility | ✅ Maximum (any Applicative) | ⚠️ Focused (common patterns) |
| Boilerplate | ⚠️ More (setup code) | ✅ Less (direct API) |
| Use Case | Generic effectful operations | Validation-specific scenarios |
When to Use Each Approach
Use modifyAllValidated when:
- You need to collect all validation errors
- Building form validation or data quality checks
- Users need comprehensive error reports
// Perfect for form validation
Validated<List<String>, OrderForm> validated = modifyAllValidated(
ORDER_TO_PRICES,
price -> validatePrice(price),
orderForm
);
Use modifyAllEither when:
- You want fail-fast behaviour
- Working in performance-critical paths
- First error is sufficient feedback
// Perfect for quick validation in high-throughput scenarios
Either<String, OrderForm> validated = modifyAllEither(
ORDER_TO_PRICES,
price -> validatePrice(price),
orderForm
);
Use modifyMaybe when:
- Invalid items should be silently filtered
- Building data enrichment pipelines
- Failures are expected and ignorable
// Perfect for optional enrichment
Maybe<OrderForm> enriched = modifyMaybe(
ORDER_TO_OPTIONAL_DISCOUNTS,
discount -> tryApplyDiscount(discount),
orderForm
);
Use traditional modifyF when:
- Working with custom Applicative functors
- Need maximum flexibility
- Building generic abstractions
- Using effects beyond validation (IO, Future, etc.)
// Still valuable for generic effectful operations
Kind<F, Form> result = FORM_TO_PERMISSION_NAMES.modifyF(
effectfulValidation,
form,
customApplicative
);
Real-World Example: Simplified Validation
Here's how the original example can be simplified using the new methods:
import static org.higherkindedj.optics.fluent.OpticOps.modifyAllValidated;
import org.higherkindedj.hkt.validated.Validated;
import java.util.List;
public class SimplifiedValidation {
// Same traversal as before
public static final Traversal<Form, String> FORM_TO_PERMISSION_NAMES =
FormLenses.principal().asTraversal()
.andThen(PrincipalPrisms.user().asTraversal())
.andThen(UserTraversals.permissions())
.andThen(PermissionLenses.name().asTraversal());
// Simplified validation - no Applicative setup needed
public static Validated<List<String>, Form> validateFormPermissions(Form form) {
return modifyAllValidated(
FORM_TO_PERMISSION_NAMES,
name -> VALID_PERMISSIONS.contains(name)
? Validated.valid(name)
: Validated.invalid("Invalid permission: " + name),
form
);
}
// Alternative: fail-fast validation
public static Either<String, Form> validateFormPermissionsFast(Form form) {
return modifyAllEither(
FORM_TO_PERMISSION_NAMES,
name -> VALID_PERMISSIONS.contains(name)
? Either.right(name)
: Either.left("Invalid permission: " + name),
form
);
}
}
Benefits of the Simplified Approach:
- ~60% less code: no `Applicative` setup, no `Kind` wrapping, no narrowing
- Clearer intent: the method name explicitly states the validation strategy
- Easier to learn: uses familiar types (`Validated`, `Either`, `Maybe`)
- Equally powerful: same type safety, same error accumulation, same composition
See FluentValidationExample.java for comprehensive demonstrations of all validation-aware methods, including complex real-world scenarios like order validation and bulk data import.
Further Reading
For a complete guide to validation-aware modifications including:
- Fluent builder API with method chaining
- Integration with existing validation frameworks (Jakarta Bean Validation)
- Performance optimisation techniques
- Additional real-world scenarios
See: Fluent API for Optics - Part 2.5: Validation-Aware Modifications
Fluent API for Optics: Java-Friendly Optic Operations
What You'll Learn
- Two styles of optic operations: static methods and fluent builders
- When to use each style for maximum clarity and productivity
- How to perform common optic operations with Java-friendly syntax
- Validation-aware modifications with `Either`, `Maybe`, and `Validated`
- Four validation strategies for different error-handling scenarios
- Effectful modifications using type classes
- Practical patterns for real-world Java applications
Introduction: Making Optics Feel Natural in Java
While optics provide immense power for working with immutable data structures, their traditional functional programming syntax can feel foreign to Java developers. Method names like view, over, and preview don't match Java conventions, and the order of parameters can be unintuitive.
The OpticOps fluent API bridges this gap, providing two complementary styles that make optics feel natural in Java:
- Static methods - Concise, direct operations for simple cases
- Fluent builders - Method chaining with IDE-discoverable operations
Both styles operate on the same underlying optics, so you can mix and match based on what feels most natural for each situation.
The Two Styles: A Quick Comparison
Let's see both styles in action with a simple example:
@GenerateLenses
public record Person(String name, int age, String status) {}
Person person = new Person("Alice", 25, "ACTIVE");
Lens<Person, Integer> ageLens = PersonLenses.age();
Static Method Style (Concise)
// Get a value
int age = OpticOps.get(person, ageLens);
// Set a value
Person updated = OpticOps.set(person, ageLens, 30);
// Modify a value
Person modified = OpticOps.modify(person, ageLens, a -> a + 1);
Fluent Builder Style (Explicit)
// Get a value
int age = OpticOps.getting(person).through(ageLens);
// Set a value
Person updated = OpticOps.setting(person).through(ageLens, 30);
// Modify a value
Person modified = OpticOps.modifying(person).through(ageLens, a -> a + 1);
Both produce identical results. The choice is about readability and discoverability for your specific use case.
Part 1: Static Methods - Simple and Direct
Static methods provide the most concise syntax. They follow a consistent pattern: operation name, source object, optic, and optional parameters.
Getting Values
Basic Get Operations
// Get a required value (Lens or Getter)
String name = OpticOps.get(person, PersonLenses.name());
// Get an optional value (Prism or Traversal)
Optional<Address> address = OpticOps.preview(person, PersonPrisms.homeAddress());
// Get all values (Traversal or Fold)
List<String> playerNames = OpticOps.getAll(
    team,
    TeamTraversals.players().andThen(PlayerLenses.name().asTraversal()));
@GenerateLenses
@GenerateTraversals
public record Team(String name, List<Player> players) {}
@GenerateLenses
public record Player(String name, int score) {}
Team team = new Team("Wildcats",
List.of(
new Player("Alice", 100),
new Player("Bob", 85)
));
// Get all player names
List<String> names = OpticOps.getAll(
team,
TeamTraversals.players().andThen(PlayerLenses.name().asTraversal())
);
// Result: ["Alice", "Bob"]
Setting Values
// Set a single value (Lens)
Person updated = OpticOps.set(person, PersonLenses.age(), 30);
// Set all values (Traversal)
Team teamWithBonuses = OpticOps.setAll(
team,
TeamTraversals.players().andThen(PlayerLenses.score().asTraversal()),
100 // Everyone gets 100 points!
);
Modifying Values
The modify operations are particularly powerful because they transform existing values rather than replacing them:
// Modify a single value
Person olderPerson = OpticOps.modify(
person,
PersonLenses.age(),
age -> age + 1
);
// Modify all values
Team teamWithDoubledScores = OpticOps.modifyAll(
team,
TeamTraversals.players().andThen(PlayerLenses.score().asTraversal()),
score -> score * 2
);
Querying Data
These operations work with Fold and Traversal to query data without modification:
// Check if any element matches
boolean hasHighScorer = OpticOps.exists(
team,
TeamTraversals.players().andThen(PlayerLenses.score().asTraversal()),
score -> score > 90
);
// Check if all elements match
boolean allPassed = OpticOps.all(
team,
TeamTraversals.players().andThen(PlayerLenses.score().asTraversal()),
score -> score >= 50
);
// Count elements
int playerCount = OpticOps.count(team, TeamTraversals.players());
// Check if empty
boolean noPlayers = OpticOps.isEmpty(team, TeamTraversals.players());
// Find first matching element
Optional<Player> topScorer = OpticOps.find(
team,
TeamTraversals.players(),
player -> player.score() > 90
);
Effectful Modifications
These are the most powerful operations, allowing modifications that can fail, accumulate errors, or execute asynchronously:
// Modify with an effect (e.g., validation)
// Note: Error should be your application's error type (e.g., String, List<String>, or a custom error class)
Functor<ValidatedKind.Witness<Error>> validatedFunctor =
    ValidatedMonad.instance(errorSemigroup); // errorSemigroup: a Semigroup<Error> for merging failures
Validated<Error, Person> result = OpticOps.modifyF(
person,
PersonLenses.age(),
age -> validateAge(age + 1), // Returns Validated<Error, Integer>
validatedFunctor
);
// Modify all with effects (e.g., async operations)
Applicative<CompletableFutureKind.Witness> cfApplicative =
CompletableFutureMonad.instance();
CompletableFuture<Team> asyncResult = FUTURE.narrow(
    OpticOps.modifyAllF(
        team,
        TeamTraversals.players().andThen(PlayerLenses.score().asTraversal()),
        score -> fetchBonusAsync(score), // Returns CompletableFuture<Integer>
        cfApplicative
    )); // FUTURE (from CompletableFutureKindHelper) unwraps the Kind
Part 2: Fluent Builders - Explicit and Discoverable
Fluent builders provide excellent IDE support through method chaining. They make the intent of your code crystal clear.
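The builder shape is simple: the entry method captures the source, and the chained method accepts the optic. This library-free sketch shows the idea behind `getting(source).through(lens)` (a minimal `Lens` stand-in, not the real OpticOps types):

```java
import java.util.function.BiFunction;
import java.util.function.Function;

public class FluentBuilderSketch {
  record Person(String name, int age) {}

  // A minimal "lens": a getter plus a copying setter for one field.
  record Lens<S, A>(Function<S, A> get, BiFunction<S, A, S> set) {}

  // getting(source) returns a builder that remembers the source...
  static <S> GetBuilder<S> getting(S source) { return new GetBuilder<>(source); }

  record GetBuilder<S>(S source) {
    // ...and through(lens) finishes the read.
    <A> A through(Lens<S, A> lens) { return lens.get().apply(source); }
  }

  public static void main(String[] args) {
    Lens<Person, Integer> age =
        new Lens<>(Person::age, (p, a) -> new Person(p.name(), a));
    Person alice = new Person("Alice", 25);
    System.out.println(getting(alice).through(age)); // 25
    System.out.println(age.set().apply(alice, 30));  // Person[name=Alice, age=30]
  }
}
```

The set/modify builders follow the same pattern: capture the source first, then take the optic plus a value or function.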
The GetBuilder Pattern
// Start with getting(source), then specify the optic
int age = OpticOps.getting(person).through(PersonLenses.age());
Optional<Address> addr = OpticOps.getting(person)
.maybeThrough(PersonPrisms.homeAddress());
List<String> names = OpticOps.getting(team)
    .allThrough(TeamTraversals.players().andThen(PlayerLenses.name().asTraversal()));
The SetBuilder Pattern
// Start with setting(source), then specify optic and value
Person updated = OpticOps.setting(person)
.through(PersonLenses.age(), 30);
Team updatedTeam = OpticOps.setting(team)
.allThrough(
TeamTraversals.players().andThen(PlayerLenses.score().asTraversal()),
100
);
The ModifyBuilder Pattern
// Start with modifying(source), then specify optic and function
Person modified = OpticOps.modifying(person)
.through(PersonLenses.age(), age -> age + 1);
Team modifiedTeam = OpticOps.modifying(team)
.allThrough(
TeamTraversals.players().andThen(PlayerLenses.score().asTraversal()),
score -> score * 2
);
// Effectful modifications
Validated<Error, Person> result = OpticOps.modifying(person)
.throughF(
PersonLenses.age(),
age -> validateAge(age + 1),
validatedFunctor
);
The QueryBuilder Pattern
// Start with querying(source), then specify checks
boolean hasHighScorer = OpticOps.querying(team)
.anyMatch(
TeamTraversals.players().andThen(PlayerLenses.score().asTraversal()),
score -> score > 90
);
boolean allPassed = OpticOps.querying(team)
.allMatch(
TeamTraversals.players().andThen(PlayerLenses.score().asTraversal()),
score -> score >= 50
);
Optional<Player> found = OpticOps.querying(team)
.findFirst(TeamTraversals.players(), player -> player.score() > 90);
int count = OpticOps.querying(team)
.count(TeamTraversals.players());
boolean empty = OpticOps.querying(team)
.isEmpty(TeamTraversals.players());
Part 2.5: Validation-Aware Modifications
This section covers the validation-aware modifications built into OpticOps. These methods integrate seamlessly with higher-kinded-j's core types (`Either`, `Maybe`, `Validated`) to provide type-safe, composable validation workflows.
Think of Validation-Aware Modifications Like...
- A quality control checkpoint 🔍 - Every modification must pass validation before being applied
- Airport security screening 🛂 - Some checks stop at the first issue (fast-track), others collect all problems (thorough inspection)
- Form validation on a website 📋 - You can show either the first error or all errors at once
- Code review process ✅ - Accumulate all feedback rather than stopping at the first comment
The Challenge: Validation During Updates
Traditional optic operations assume modifications always succeed. But in real applications, updates often need validation:
// ❌ Problem: No validation during modification
Person updated = OpticOps.modify(person, PersonLenses.age(), age -> age + 1);
// What if the new age is invalid? No way to handle errors!
// ❌ Problem: Manual validation is verbose and error-prone
int currentAge = OpticOps.get(person, PersonLenses.age());
if (currentAge + 1 >= 0 && currentAge + 1 <= 120) {
person = OpticOps.set(person, PersonLenses.age(), currentAge + 1);
} else {
// Handle error... but how do we return both success and failure?
}
Validation-aware modifications solve this by integrating validation directly into the optic operation, returning a result type that represents either success or failure.
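Conceptually, a validation-aware modify reads the focus, validates it, and either rebuilds the source or surfaces the error. This plain-Java sketch (the names and the tiny `Either` stand-in are illustrative, not the library API) captures what `modifyEither` does for a single `age` focus:

```java
import java.util.function.Function;

public class ModifyEitherSketch {
  // Minimal Either stand-in; the library's Either plays this role.
  sealed interface Either<L, R> {}
  record Left<L, R>(L error) implements Either<L, R> {}
  record Right<L, R>(R value) implements Either<L, R> {}

  record Person(String name, int age) {}

  // Conceptual modifyEither for the 'age' focus; the real method
  // generalises this over any Lens and any error type.
  static Either<String, Person> modifyAgeEither(
      Person p, Function<Integer, Either<String, Integer>> validate) {
    return switch (validate.apply(p.age())) {
      case Left<String, Integer> l -> new Left<>(l.error());
      case Right<String, Integer> r -> new Right<>(new Person(p.name(), r.value()));
    };
  }

  public static void main(String[] args) {
    Function<Integer, Either<String, Integer>> bumpAge = a -> {
      int next = a + 1;
      if (next > 120) return new Left<>("Age out of range: " + next);
      return new Right<>(next);
    };
    System.out.println(modifyAgeEither(new Person("Alice", 25), bumpAge));
    // Right[value=Person[name=Alice, age=26]]
    System.out.println(modifyAgeEither(new Person("Old", 120), bumpAge));
    // Left[error=Age out of range: 121]
  }
}
```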
The Solution: Four Validation Strategies
OpticOps provides four complementary validation methods, each suited to different scenarios:
| Method | Core Type | Behaviour | Best For |
|---|---|---|---|
| `modifyEither` | `Either<E, S>` | Short-circuit on first error | Sequential validation, fail-fast workflows |
| `modifyMaybe` | `Maybe<S>` | Success or nothing (no error details) | Optional enrichment, silent failure |
| `modifyAllValidated` | `Validated<List<E>, S>` | Accumulate ALL errors | Form validation, comprehensive feedback |
| `modifyAllEither` | `Either<E, S>` | Stop at first error in collection | Performance-critical batch validation |
// Same validation logic, different error handling strategies
Order order = new Order("ORD-123", List.of(
new BigDecimal("-10.00"), // Invalid: negative
new BigDecimal("15000.00") // Invalid: too high
));
// Strategy 1: Either - stops at FIRST error
Either<String, Order> result1 = OpticOps.modifyAllEither(
order, orderPricesTraversal, price -> validatePrice(price)
);
// Result: Left("Price cannot be negative: -10.00")
// Strategy 2: Validated - collects ALL errors
Validated<List<String>, Order> result2 = OpticOps.modifyAllValidated(
order, orderPricesTraversal, price -> validatePrice(price)
);
// Result: Invalid(["Price cannot be negative: -10.00",
// "Price exceeds maximum: 15000.00"])
Static Method Style: Validation Operations
Single-Field Validation with modifyEither
Perfect for validating and modifying a single field where you want to fail fast with detailed error messages.
@GenerateLenses
public record User(String username, String email, int age, String bio) {}
// Validate email format
Either<String, User> result = OpticOps.modifyEither(
user,
UserLenses.email(),
email -> {
if (email.matches("^[A-Za-z0-9+_.-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$")) {
return Either.right(email); // Valid
} else {
return Either.left("Invalid email format: " + email); // Error
}
}
);
// Handle the result
result.fold(
error -> {
log.error("Validation failed: {}", error);
return null;
},
validUser -> {
log.info("User updated: {}", validUser.email());
return null;
}
);
Optional Validation with modifyMaybe
Useful when validation failure shouldn't produce error messages—either it works or it doesn't.
// Trim and validate bio (silent failure if too long)
Maybe<User> result = OpticOps.modifyMaybe(
user,
UserLenses.bio(),
bio -> {
String trimmed = bio.trim();
if (trimmed.length() <= 500) {
return Maybe.just(trimmed); // Success
} else {
return Maybe.nothing(); // Too long, fail silently
}
}
);
// Check if validation succeeded
if (result.isJust()) {
User validUser = result.get();
// Proceed with valid user
} else {
// Validation failed, use fallback logic
}
Multi-Field Validation with Error Accumulation
The most powerful option: validate multiple fields and collect all validation errors, not just the first one.
@GenerateTraversals
public record Order(String orderId, List<BigDecimal> itemPrices) {}
// Validate ALL prices and accumulate errors
Validated<List<String>, Order> result = OpticOps.modifyAllValidated(
order,
orderPricesTraversal,
price -> {
if (price.compareTo(BigDecimal.ZERO) < 0) {
return Validated.invalid("Price cannot be negative: " + price);
} else if (price.compareTo(new BigDecimal("10000")) > 0) {
return Validated.invalid("Price exceeds maximum: " + price);
} else {
return Validated.valid(price); // Valid price
}
}
);
// Handle accumulated errors
result.fold(
errors -> {
System.out.println("Validation failed with " + errors.size() + " errors:");
errors.forEach(error -> System.out.println(" - " + error));
return null;
},
validOrder -> {
System.out.println("All prices validated successfully!");
return null;
}
);
Multi-Field Validation with Short-Circuiting
When you have many fields to validate but want to stop at the first error (better performance, less detailed feedback):
// Validate all prices, stop at FIRST error
Either<String, Order> result = OpticOps.modifyAllEither(
order,
orderPricesTraversal,
price -> validatePrice(price) // Returns Either<String, BigDecimal>
);
// Only the first error is reported
result.fold(
firstError -> System.out.println("Failed: " + firstError),
validOrder -> System.out.println("Success!")
);
Fluent Builder Style: ModifyingWithValidation
The fluent API provides a dedicated builder for validation-aware modifications, making the intent even clearer:
// Start with modifyingWithValidation(source), then choose validation strategy
// Single field with Either
Either<String, User> result1 = OpticOps.modifyingWithValidation(user)
.throughEither(UserLenses.email(), email -> validateEmail(email));
// Single field with Maybe
Maybe<User> result2 = OpticOps.modifyingWithValidation(user)
.throughMaybe(UserLenses.bio(), bio -> validateBio(bio));
// All fields with Validated (error accumulation)
Validated<List<String>, Order> result3 = OpticOps.modifyingWithValidation(order)
.allThroughValidated(orderPricesTraversal, price -> validatePrice(price));
// All fields with Either (short-circuit)
Either<String, Order> result4 = OpticOps.modifyingWithValidation(order)
.allThroughEither(orderPricesTraversal, price -> validatePrice(price));
Real-World Scenario: User Registration
Let's see how to use validation-aware modifications for a complete user registration workflow:
@GenerateLenses
public record UserRegistration(String username, String email, int age, String bio) {}
// Scenario: Sequential validation (stop at first error)
Either<String, UserRegistration> validateRegistration(UserRegistration form) {
return OpticOps.modifyEither(form, UserRegistrationLenses.username(), this::validateUsername)
    .flatMap(user -> OpticOps.modifyEither(user, UserRegistrationLenses.email(), this::validateEmail))
    .flatMap(user -> OpticOps.modifyEither(user, UserRegistrationLenses.age(), this::validateAge))
    .flatMap(user -> OpticOps.modifyEither(user, UserRegistrationLenses.bio(), this::validateBio));
}
private Either<String, String> validateUsername(String username) {
if (username.length() < 3) {
return Either.left("Username must be at least 3 characters");
}
if (username.length() > 20) {
return Either.left("Username must not exceed 20 characters");
}
if (!username.matches("^[a-zA-Z0-9_]+$")) {
return Either.left("Username can only contain letters, numbers, and underscores");
}
return Either.right(username);
}
// Usage
validateRegistration(formData).fold(
error -> {
System.out.println("Registration failed: " + error);
// Show error to user, stop processing
return null;
},
validForm -> {
System.out.println("Registration successful!");
// Proceed with user creation
return null;
}
);
Real-World Scenario: Bulk Data Import
When importing data, you often want to collect all validation errors to give comprehensive feedback:
@GenerateTraversals
public record DataImport(List<String> emailAddresses, String importedBy) {}
// Validate all emails, accumulate ALL errors
Validated<List<String>, DataImport> validateImport(DataImport importData) {
return OpticOps.modifyingWithValidation(importData)
.allThroughValidated(
DataImportTraversals.emailAddresses(),
email -> {
if (!email.matches("^[A-Za-z0-9+_.-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$")) {
return Validated.invalid("Invalid email: " + email);
} else {
return Validated.valid(email.toLowerCase().trim()); // Normalise
}
}
);
}
// Usage
validateImport(importBatch).fold(
errors -> {
System.out.println("Import failed with " + errors.size() + " invalid emails:");
errors.forEach(error -> System.out.println(" - " + error));
// User can fix ALL errors at once
return null;
},
validImport -> {
System.out.println("Import successful! " +
validImport.emailAddresses().size() +
" emails validated.");
return null;
}
);
When to Use Each Validation Strategy
Use modifyEither When:
✅ Sequential workflows where you want to stop at the first error
// Login validation - stop at first failure
OpticOps.modifyEither(credentials, CredentialsLenses.username(), this::validateUsername)
.flatMap(c -> OpticOps.modifyEither(c, CredentialsLenses.password(), this::checkPassword))
✅ Single-field validation with detailed error messages
✅ Early exit is beneficial (no point continuing if a critical field is invalid)
Use modifyMaybe When:
✅ Optional enrichment where failure is acceptable
// Try to geocode address, but it's okay if it fails
OpticOps.modifyMaybe(order, OrderLenses.address(), addr -> geocodeAddress(addr))
✅ Error details aren't needed (just success/failure)
✅ Silent failures are acceptable
Use modifyAllValidated When:
✅ Form validation where users need to see all errors at once
// Show all validation errors on a registration form
OpticOps.modifyAllValidated(form, formFieldsTraversal, this::validateField)
✅ Comprehensive feedback is important
✅ User experience matters (fixing all errors in one go)
Use modifyAllEither When:
✅ Performance is critical and you have many fields to validate
✅ First error is sufficient for debugging or logging
✅ Resource-intensive validation where stopping early saves time
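The short-circuit versus accumulate trade-off can be shown in plain Java, independent of the optics API. The helper class below is purely illustrative (it is not part of `OpticOps`): the first method stops scanning at the first error, the second always scans everything.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

final class ValidationStrategies {
    // Short-circuit: return the first error and stop scanning
    static <A> Optional<String> firstError(
            List<A> items, Function<A, Optional<String>> check) {
        for (A item : items) {
            Optional<String> err = check.apply(item);
            if (err.isPresent()) return err; // stop here -- no further work
        }
        return Optional.empty();
    }

    // Accumulate: always scan everything and collect every error
    static <A> List<String> allErrors(
            List<A> items, Function<A, Optional<String>> check) {
        List<String> errors = new ArrayList<>();
        for (A item : items) {
            check.apply(item).ifPresent(errors::add);
        }
        return errors;
    }
}
```

`modifyAllEither` behaves like `firstError` (cheap, one message), while `modifyAllValidated` behaves like `allErrors` (full pass, comprehensive feedback).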
Comparison with Traditional modifyF
The validation methods simplify common patterns that previously required manual Applicative wiring:
Before (using modifyF):
// Manual applicative construction with explicit error type conversion
Applicative<Validated.Witness<List<String>>> app =
ValidatedApplicative.instance(ListSemigroup.instance());
Validated<List<String>, Order> result = OpticOps.modifyAllF(
order,
orderPricesTraversal,
price -> {
Validated<String, BigDecimal> validatedPrice = validatePrice(price);
// Must convert error type from String to List<String>
return ValidatedKindHelper.VALIDATED.widen(
validatedPrice.bimap(List::of, Function.identity())
);
},
app
).narrow();
After (using modifyAllValidated):
// Clean, concise, and clear intent
Validated<List<String>, Order> result = OpticOps.modifyAllValidated(
order,
orderPricesTraversal,
price -> validatePrice(price)
);
The traditional modifyF methods are still valuable for:
- Custom effect types beyond Either, Maybe, and Validated
- Advanced applicative scenarios with custom combinators
- Asynchronous validation (e.g., CompletableFuture)
- Integration with third-party effect systems
For standard validation scenarios, the dedicated methods are clearer and more concise.
Performance Considerations
- Either short-circuiting: stops at the first error, potentially faster for large collections
- Validated accumulation: checks all elements; more work, but better UX
- Maybe: minimal overhead, just success/nothing
- Object allocation: all methods create new result objects (standard immutable pattern)
For performance-critical code with large collections:
- Use modifyAllEither if first-error is acceptable
- Use modifyAllValidated if comprehensive errors are required
- Consider pre-filtering with the Stream API before validation
- Cache compiled validators (e.g., compiled regex patterns)
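For example, a compiled regex can live in a static field so it is built once rather than on every call, and can double as a cheap pre-filter before the full validation pipeline runs. The class and method names below are illustrative, not part of the library:

```java
import java.util.List;
import java.util.regex.Pattern;

final class EmailValidators {
    // Compiled once at class load; reused across every validation call
    private static final Pattern EMAIL = Pattern.compile(
        "^[A-Za-z0-9+_.-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$");

    // Pre-filter before running the (more expensive) validation pipeline
    static List<String> plausibleOnly(List<String> emails) {
        return emails.stream()
            .filter(e -> EMAIL.matcher(e).matches())
            .toList();
    }
}
```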
Integration with Existing Validation
Validation-aware modifications work seamlessly with existing validation libraries:
// Jakarta Bean Validation integration
import jakarta.validation.Validator;
import jakarta.validation.ConstraintViolation;
Either<List<String>, User> validateWithJakarta(User user, Validator validator) {
return OpticOps.modifyEither(
user,
UserLenses.email(),
email -> {
// Validate the candidate value against the constraints declared
// on User's "email" property (validate(email) on a raw String
// would find no constraints to check)
Set<ConstraintViolation<User>> violations =
    validator.validateValue(User.class, "email", email);
if (violations.isEmpty()) {
return Either.right(email);
} else {
return Either.left(
violations.stream()
.map(ConstraintViolation::getMessage)
.collect(Collectors.toList())
);
}
}
);
}
Part 3: Real-World Examples
Example 1: E-Commerce Order Processing
@GenerateLenses
@GenerateTraversals
public record Order(String orderId,
OrderStatus status,
List<OrderItem> items,
ShippingAddress address) {}
@GenerateLenses
public record OrderItem(String productId, int quantity, BigDecimal price) {}
@GenerateLenses
public record ShippingAddress(String street, String city, String postCode) {}
// Scenario: Apply bulk discount and update shipping
Order processOrder(Order order, BigDecimal discountPercent) {
// Apply discount using fluent API
Order discountedOrder = OpticOps.modifying(order)
.allThrough(
OrderTraversals.items().andThen(OrderItemLenses.price().asTraversal()),
price -> price.multiply(BigDecimal.ONE.subtract(discountPercent))
);
// Update status using static method
return OpticOps.set(
discountedOrder,
OrderLenses.status(),
OrderStatus.PROCESSING
);
}
Example 2: Validation with Error Accumulation
// Using Validated to accumulate all validation errors
Validated<List<String>, Order> validateOrder(Order order) {
Applicative<Validated.Witness<List<String>>> applicative =
ValidatedApplicative.instance(ListSemigroup.instance());
// Validate all item quantities
return OpticOps.modifyAllF(
order,
OrderTraversals.items().andThen(OrderItemLenses.quantity().asTraversal()),
qty -> {
if (qty > 0 && qty <= 1000) {
return Validated.valid(qty);
} else {
return Validated.invalid(List.of(
"Quantity must be between 1 and 1000, got: " + qty
));
}
},
applicative
).narrow();
}
Example 3: Async Database Updates
// Using CompletableFuture for async operations
CompletableFuture<Team> updatePlayerScoresAsync(
Team team,
Function<Player, CompletableFuture<Integer>> fetchNewScore
) {
Applicative<CompletableFutureKind.Witness> cfApplicative =
CompletableFutureMonad.instance();
// Wrap the whole future in the Kind (not each value inside it),
// then narrow the final Kind back to a CompletableFuture<Team>
return CompletableFutureKind.narrow(
    OpticOps.modifyAllF(
        team,
        TeamTraversals.players(),
        player -> CompletableFutureKind.of(
            fetchNewScore.apply(player)
                .thenApply(newScore ->
                    OpticOps.set(player, PlayerLenses.score(), newScore))),
        cfApplicative
    )
);
}
When to Use Each Style
Use Static Methods When:
✅ Performing simple, one-off operations
// Clear and concise
String name = OpticOps.get(person, PersonLenses.name());
✅ Chaining is not needed
// Direct transformation
Person older = OpticOps.modify(person, PersonLenses.age(), a -> a + 1);
✅ Performance is critical (slightly less object allocation)
Use Fluent Builders When:
✅ Building complex workflows
import static java.util.stream.Collectors.toList;
// Clear intent at each step
return OpticOps.getting(order)
.allThrough(OrderTraversals.items())
.stream()
.filter(item -> item.quantity() > 10)
.map(OrderItem::productId)
.collect(toList());
✅ IDE autocomplete is important (great for discovery)
✅ Code reviews matter (explicit intent)
✅ Teaching or documentation (self-explanatory)
Common Patterns and Idioms
Pattern 1: Pipeline Transformations
// Sequential transformations for multi-step pipeline
// Note: Result and Data should be your application's domain types with appropriate lenses
Result processData(Data input) {
Data afterStage1 = OpticOps.modifying(input)
.through(DataLenses.stage1(), this::transformStage1);
Data afterStage2 = OpticOps.modifying(afterStage1)
.through(DataLenses.stage2(), this::transformStage2);
return OpticOps.modifying(afterStage2)
.through(DataLenses.stage3(), this::transformStage3);
}
Pattern 2: Conditional Updates
// Static style for simple conditionals
Person updateIfAdult(Person person) {
int age = OpticOps.get(person, PersonLenses.age());
return age >= 18
? OpticOps.set(person, PersonLenses.status(), "ADULT")
: person;
}
Pattern 3: Bulk Operations with Filtering
// Combine both styles for clarity
Team updateTopPerformers(Team team, int threshold) {
// Use fluent for query
List<Player> topPerformers = OpticOps.querying(team)
.allThrough(TeamTraversals.players())
.stream()
.filter(p -> p.score() >= threshold)
.toList();
// Use static for transformation
return OpticOps.modifyAll(
team,
TeamTraversals.players(),
player -> topPerformers.contains(player)
? OpticOps.set(player, PlayerLenses.status(), "STAR")
: player
);
}
Performance Considerations
Object Allocation
- Static methods: Minimal allocation (just the result)
- Fluent builders: Create intermediate builder objects
- Impact: Negligible for most applications; avoid in tight loops
Optic Composition
Both styles benefit from composing optics once and reusing them:
// ✅ Good: Compose once, use many times
Traversal<Order, BigDecimal> orderItemPrices =
    OrderTraversals.items()
        .andThen(OrderItemLenses.price().asTraversal());
orders.stream()
    .map(order -> OpticOps.getAll(order, orderItemPrices))
    .collect(toList());
// ❌ Avoid: Recomposing in loop
orders.stream()
.map(order -> OpticOps.getAll(
order,
OrderTraversals.items()
.andThen(OrderItemLenses.price().asTraversal()) // Recomposed each time!
))
.collect(toList());
Integration with Existing Java Code
Working with Streams
// Optics integrate naturally with Stream API
List<String> highScorerNames = OpticOps.getting(team)
.allThrough(TeamTraversals.players())
.stream()
.filter(p -> p.score() > 90)
.map(p -> OpticOps.get(p, PlayerLenses.name()))
.collect(toList());
Working with Optional
// Optics and Optional work together
Optional<Person> maybePerson = findPerson(id);
Optional<Integer> age = maybePerson
.map(p -> OpticOps.get(p, PersonLenses.age()));
Person updated = maybePerson
.map(p -> OpticOps.modify(p, PersonLenses.age(), a -> a + 1))
.orElse(new Person("Default", 0, "UNKNOWN"));
Common Pitfalls
❌ Don't: Call get then set
// Inefficient - two traversals
int age = OpticOps.get(person, PersonLenses.age());
Person updated = OpticOps.set(person, PersonLenses.age(), age + 1);
✅ Do: Use modify
// Efficient - single traversal
Person updated = OpticOps.modify(person, PersonLenses.age(), a -> a + 1);
❌ Don't: Recompose optics unnecessarily
// Bad - composing in a loop
for (Order order : orders) {
var itemPrices = OrderTraversals.items()
.andThen(OrderItemLenses.price().asTraversal()); // Composed each iteration!
process(OpticOps.getAll(order, itemPrices));
}
✅ Do: Compose once, reuse
// Good - compose outside loop
var itemPrices = OrderTraversals.items()
.andThen(OrderItemLenses.price().asTraversal());
for (Order order : orders) {
process(OpticOps.getAll(order, itemPrices));
}
Further Reading
- Fluent Interfaces: Martin Fowler's article on designing fluent APIs
- Builder Pattern: Effective Java, 3rd Edition by Joshua Bloch
- Method Chaining: Patterns of Enterprise Application Architecture
- Lens Tutorial: Haskell lens tutorial for deeper theoretical understanding
Next Steps:
- Free Monad DSL for Optics - Build composable programs
- Optic Interpreters - Multiple execution strategies
- Advanced Patterns - Complex real-world scenarios
Free Monad DSL: Composable Optic Programs
What you'll learn:
- What Free monads are and why they're powerful for optics
- How to build composable optic programs step by step
- Separating program description from execution
- Using conditional logic and branching in programs
- Real-world scenarios: audit trails, validation, and testing
- Creating reusable program fragments
Introduction: Beyond Immediate Execution
When you use optics directly, they execute immediately: you read a value, transform a field, update a structure, and it all happens right away. This direct execution is perfect for simple cases, but what if you need more?
Consider these real-world requirements:
- Audit trails: Record every data change for compliance
- Validation: Check all constraints before making any changes
- Testing: Verify your logic without touching real data
- Optimisation: Analyse and fuse multiple operations for efficiency
- Dry-runs: See what would change without actually changing it
This is where the Free monad DSL comes in. It lets you describe a sequence of optic operations as data, then interpret that description in different ways.
A Free monad program is like a recipe. Writing the recipe doesn't cook the meal—it just describes what to do. You can review the recipe, validate it, translate it, or follow it to cook. The Free monad DSL gives you that same power with optic operations.
Part 1: Understanding Free Monads (Gently)
What Is a Free Monad?
A Free monad is a way to build a program as data. Instead of executing operations immediately, you construct a data structure that describes what operations to perform. Later, you choose how to execute (interpret) that structure.
Think of it like this:
// Direct execution (happens immediately)
Person updated = PersonLenses.age().set(30, person);
// Free monad (just builds a description)
Free<OpticOpKind.Witness, Person> program =
OpticPrograms.set(person, PersonLenses.age(), 30);
// Nothing happened yet! We just described what to do.
// Now we choose how to interpret it
Person result = OpticInterpreters.direct().run(program);
// NOW it executed
Why Is This Useful?
By separating description from execution, you can:
- Review the program before running it
- Validate all operations without executing them
- Log every operation for audit trails
- Test the logic with mock data
- Transform the program (optimise, translate, etc.)
For optics specifically, this means you can build complex data transformation workflows and then choose how to execute them based on your needs.
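The description-versus-execution split can be approximated with nothing more than `java.util.function.Supplier`. The sketch below is only an intuition pump in plain Java, not the Free monad API: building a `Supplier` performs no work, and the `andThen` helper is essentially `flatMap` for `Supplier`.

```java
import java.util.function.Function;
import java.util.function.Supplier;

final class DeferredIntuition {
    // Building a description performs no work: nothing runs yet
    static Supplier<Integer> describeGetAge(int storedAge) {
        return () -> storedAge;
    }

    // Sequence a dependent step onto a description -- still no execution
    static <A, B> Supplier<B> andThen(
            Supplier<A> first, Function<A, Supplier<B>> next) {
        return () -> next.apply(first.get()).get();
    }
}
```

Calling `get()` plays the role of the interpreter: only then does the composed description actually run.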
Part 2: Building Your First Optic Program
Simple Programs: Get, Set, Modify
Let's start with the basics:
@GenerateLenses
public record Person(String name, int age, String status) {}
Person person = new Person("Alice", 25, "ACTIVE");
// Build a program that gets the age
Free<OpticOpKind.Witness, Integer> getProgram =
OpticPrograms.get(person, PersonLenses.age());
// Build a program that sets the age
Free<OpticOpKind.Witness, Person> setProgram =
OpticPrograms.set(person, PersonLenses.age(), 30);
// Build a program that modifies the age
Free<OpticOpKind.Witness, Person> modifyProgram =
OpticPrograms.modify(person, PersonLenses.age(), age -> age + 1);
At this point, nothing has executed. We've just built descriptions of operations. To actually run them:
// Execute with direct interpreter
DirectOpticInterpreter interpreter = OpticInterpreters.direct();
Integer age = interpreter.run(getProgram); // 25
Person updated = interpreter.run(setProgram); // age is now 30
Person modified = interpreter.run(modifyProgram); // age is now 26
Composing Programs: The Power of flatMap
The real power emerges when you compose multiple operations. The flatMap method lets you sequence operations where each step can depend on previous results:
// Program: Get the age, then if they're an adult, increment it
Free<OpticOpKind.Witness, Person> adultBirthdayProgram =
OpticPrograms.get(person, PersonLenses.age())
.flatMap(age -> {
if (age >= 18) {
return OpticPrograms.modify(
person,
PersonLenses.age(),
a -> a + 1
);
} else {
// Return unchanged person
return OpticPrograms.pure(person);
}
});
// Execute it
Person result = OpticInterpreters.direct().run(adultBirthdayProgram);
Let's break down what's happening:
- get creates a program that will retrieve the age
- flatMap says "once you have the age, use it to decide what to do next"
- Inside flatMap, we make a decision based on the age value
- We return a new program (either modify or pure)
- The interpreter executes the composed program step by step
Multi-Step Programs: Complex Workflows
You can chain multiple flatMap calls to build sophisticated workflows:
@GenerateLenses
public record Employee(String name, int salary, EmployeeStatus status) {}
enum EmployeeStatus { JUNIOR, SENIOR, PROBATION, RETIRED }
// Program: Annual review and potential promotion
Free<OpticOpKind.Witness, Employee> annualReviewProgram(Employee employee) {
return OpticPrograms.get(employee, EmployeeLenses.salary())
.flatMap(currentSalary -> {
// Step 1: Give a 10% raise
int newSalary = currentSalary + (currentSalary / 10);
return OpticPrograms.set(employee, EmployeeLenses.salary(), newSalary);
})
.flatMap(raisedEmployee ->
// Step 2: Check if salary justifies promotion
OpticPrograms.get(raisedEmployee, EmployeeLenses.salary())
.flatMap(salary -> {
if (salary > 100_000) {
return OpticPrograms.set(
raisedEmployee,
EmployeeLenses.status(),
EmployeeStatus.SENIOR
);
} else {
return OpticPrograms.pure(raisedEmployee);
}
})
);
}
// Execute for an employee
Employee alice = new Employee("Alice", 95_000, EmployeeStatus.JUNIOR);
Free<OpticOpKind.Witness, Employee> program = annualReviewProgram(alice);
Employee promoted = OpticInterpreters.direct().run(program);
// Result: Employee("Alice", 104_500, SENIOR)
Part 3: Working with Collections (Traversals and Folds)
The DSL works beautifully with traversals for batch operations:
@GenerateLenses
@GenerateTraversals
public record Team(String name, List<Player> players) {}
@GenerateLenses
public record Player(String name, int score) {}
Team team = new Team("Wildcats",
List.of(
new Player("Alice", 80),
new Player("Bob", 90)
));
// Program: Double all scores and check if everyone passes
Free<OpticOpKind.Witness, Boolean> scoreUpdateProgram =
OpticPrograms.modifyAll(
team,
TeamTraversals.players().andThen(PlayerLenses.score().asTraversal()),
score -> score * 2
)
.flatMap(updatedTeam ->
// Now check if all players have passing scores
OpticPrograms.all(
updatedTeam,
TeamTraversals.players().andThen(PlayerLenses.score().asTraversal()),
score -> score >= 100
)
);
// Execute
Boolean allPass = OpticInterpreters.direct().run(scoreUpdateProgram);
// Result: true (Alice: 160, Bob: 180)
Querying with Programs
// Program: Find all high scorers
Free<OpticOpKind.Witness, List<Player>> findHighScorers =
OpticPrograms.getAll(team, TeamTraversals.players())
.flatMap(players -> {
List<Player> highScorers = players.stream()
.filter(p -> p.score() > 85)
.toList();
return OpticPrograms.pure(highScorers);
});
// Execute
List<Player> topPlayers = OpticInterpreters.direct().run(findHighScorers);
Part 4: Real-World Scenarios
Scenario 1: Data Migration with Validation
@GenerateLenses
public record UserV1(String username, String email) {}
@GenerateLenses
public record UserV2(String username, String email, boolean verified) {}
// Note: Either is from higher-kinded-j (org.higherkindedj.hkt.either.Either)
// It represents a value that can be either a Left (error) or Right (success)
// Program: Migrate user with email validation
Free<OpticOpKind.Witness, Either<String, UserV2>> migrateUser(UserV1 oldUser) {
return OpticPrograms.get(oldUser, UserV1Lenses.email())
.flatMap(email -> {
if (email.contains("@") && email.contains(".")) {
// Valid email - proceed with migration
UserV2 newUser = new UserV2(
oldUser.username(),
email,
false // Will be verified later
);
return OpticPrograms.pure(Either.right(newUser));
} else {
// Invalid email - fail migration
return OpticPrograms.pure(Either.left(
"Invalid email: " + email
));
}
});
}
// Execute migration
Free<OpticOpKind.Witness, Either<String, UserV2>> program =
migrateUser(new UserV1("alice", "alice@example.com"));
Either<String, UserV2> result = OpticInterpreters.direct().run(program);
By building the migration as a program, you can:
- Validate the entire migration plan before executing
- Log every transformation for audit purposes
- Test the migration logic without touching real data
- Roll back if any step fails
Scenario 2: Audit Trail for Financial Transactions
@GenerateLenses
public record Account(String accountId, BigDecimal balance) {}
@GenerateLenses
public record Transaction(Account from, Account to, BigDecimal amount) {}
// Program: Transfer money between accounts
Free<OpticOpKind.Witness, Transaction> transferProgram(
Transaction transaction
) {
return OpticPrograms.get(transaction, TransactionLenses.amount())
.flatMap(amount ->
// Deduct from source account
OpticPrograms.modify(
transaction,
TransactionLenses.from().andThen(AccountLenses.balance()),
balance -> balance.subtract(amount)
)
)
.flatMap(txn ->
// Add to destination account
OpticPrograms.modify(
txn,
TransactionLenses.to().andThen(AccountLenses.balance()),
balance -> balance.add(txn.amount())
)
);
}
// Execute with logging for audit trail
Account acc1 = new Account("ACC001", new BigDecimal("1000.00"));
Account acc2 = new Account("ACC002", new BigDecimal("500.00"));
Transaction txn = new Transaction(acc1, acc2, new BigDecimal("100.00"));
Free<OpticOpKind.Witness, Transaction> program = transferProgram(txn);
// Use logging interpreter to record every operation
LoggingOpticInterpreter logger = OpticInterpreters.logging();
Transaction result = logger.run(program);
// Review audit trail
logger.getLog().forEach(System.out::println);
/* Output:
GET: TransactionLenses.amount() -> 100.00
MODIFY: TransactionLenses.from().andThen(AccountLenses.balance()) from 1000.00 to 900.00
MODIFY: TransactionLenses.to().andThen(AccountLenses.balance()) from 500.00 to 600.00
*/
Scenario 3: Dry-Run Validation Before Production
@GenerateLenses
@GenerateTraversals
public record ProductCatalogue(List<Product> products) {}
@GenerateLenses
public record Product(String id, BigDecimal price, int stock) {}
// Program: Bulk price update
Free<OpticOpKind.Witness, ProductCatalogue> bulkPriceUpdate(
ProductCatalogue catalogue,
BigDecimal markup
) {
return OpticPrograms.modifyAll(
catalogue,
ProductCatalogueTraversals.products()
.andThen(ProductLenses.price().asTraversal()),
price -> price.multiply(BigDecimal.ONE.add(markup))
);
}
// First, validate without executing
ProductCatalogue catalogue = new ProductCatalogue(
List.of(
new Product("P001", new BigDecimal("99.99"), 10),
new Product("P002", new BigDecimal("49.99"), 5)
)
);
Free<OpticOpKind.Witness, ProductCatalogue> program =
bulkPriceUpdate(catalogue, new BigDecimal("0.10")); // 10% markup
// Validate first
ValidationOpticInterpreter validator = OpticInterpreters.validating();
ValidationOpticInterpreter.ValidationResult validation =
validator.validate(program);
if (validation.isValid()) {
// All good - now execute for real
ProductCatalogue updated = OpticInterpreters.direct().run(program);
System.out.println("Price update successful!");
} else {
// Something wrong - review errors
validation.errors().forEach(System.err::println);
validation.warnings().forEach(System.out::println);
}
Part 5: Advanced Patterns
Pattern 1: Reusable Program Fragments
You can build libraries of reusable program fragments:
// Library of common operations
public class PersonPrograms {
public static Free<OpticOpKind.Witness, Person> celebrateBirthday(
Person person
) {
return OpticPrograms.modify(
person,
PersonLenses.age(),
age -> age + 1
);
}
public static Free<OpticOpKind.Witness, Person> promoteIfEligible(
Person person
) {
return OpticPrograms.get(person, PersonLenses.age())
.flatMap(age -> {
if (age >= 30) {
return OpticPrograms.set(
person,
PersonLenses.status(),
"SENIOR"
);
} else {
return OpticPrograms.pure(person);
}
});
}
// Combine operations
public static Free<OpticOpKind.Witness, Person> annualUpdate(
Person person
) {
return celebrateBirthday(person)
.flatMap(PersonPrograms::promoteIfEligible);
}
}
// Use them
Person alice = new Person("Alice", 29, "JUNIOR");
Free<OpticOpKind.Witness, Person> program = PersonPrograms.annualUpdate(alice);
Person updated = OpticInterpreters.direct().run(program);
Pattern 2: Conditional Branching
enum PerformanceRating { EXCELLENT, GOOD, SATISFACTORY, POOR }
// Program with complex branching logic
Free<OpticOpKind.Witness, Employee> processPerformanceReview(
Employee employee,
PerformanceRating rating
) {
return switch (rating) {
case EXCELLENT -> OpticPrograms.modify(
employee,
EmployeeLenses.salary(),
salary -> salary + (salary / 5) // 20% raise
).flatMap(emp ->
OpticPrograms.set(emp, EmployeeLenses.status(), EmployeeStatus.SENIOR)
);
case GOOD -> OpticPrograms.modify(
employee,
EmployeeLenses.salary(),
salary -> salary + (salary / 10) // 10% raise
);
case SATISFACTORY -> OpticPrograms.pure(employee); // No change
case POOR -> OpticPrograms.set(
employee,
EmployeeLenses.status(),
EmployeeStatus.PROBATION
);
};
}
Pattern 3: Accumulating Results
// Note: Tuple and Tuple2 are from higher-kinded-j (org.higherkindedj.hkt.tuple.Tuple, Tuple2)
// Tuple.of() creates a Tuple2 instance to pair two values together
// Program that accumulates statistics while processing
record ProcessingStats(int processed, int modified, int skipped) {}
Free<OpticOpKind.Witness, Tuple2<Team, ProcessingStats>> processTeamWithStats(
Team team
) {
// This is simplified - in practice you'd thread stats through flatMaps
return OpticPrograms.getAll(team, TeamTraversals.players())
.flatMap(players -> {
int processed = players.size();
int modified = (int) players.stream()
.filter(p -> p.score() < 50)
.count();
int skipped = processed - modified;
return OpticPrograms.modifyAll(
team,
TeamTraversals.players(),
player -> player.score() < 50
? OpticOps.set(player, PlayerLenses.score(), 50)
: player
).map(updatedTeam ->
Tuple.of(
updatedTeam,
new ProcessingStats(processed, modified, skipped)
)
);
});
}
Part 6: Comparison with Direct Execution
When to Use Free Monad DSL
✅ Use Free Monad DSL when you need:
- Audit trails and logging
- Validation before execution
- Testing complex logic
- Multiple execution strategies
- Optimisation opportunities
- Dry-run capabilities
When to Use Direct Execution
✅ Use Direct Execution (Fluent API) when:
- Simple, straightforward operations
- No need for introspection
- Performance is critical
- The workflow is stable and well-understood
Side-by-Side Comparison
// Direct execution (immediate)
Person result = OpticOps.modify(
person,
PersonLenses.age(),
age -> age + 1
);
// Free monad (deferred)
Free<OpticOpKind.Witness, Person> program =
OpticPrograms.modify(
person,
PersonLenses.age(),
age -> age + 1
);
Person result = OpticInterpreters.direct().run(program);
The Free monad version requires more code, but gives you the power to:
// Log it
LoggingOpticInterpreter logger = OpticInterpreters.logging();
Person result = logger.run(program);
logger.getLog().forEach(System.out::println);
// Validate it
ValidationOpticInterpreter validator = OpticInterpreters.validating();
ValidationResult validation = validator.validate(program);
// Test it with mocks
MockOpticInterpreter mock = new MockOpticInterpreter();
Person mockResult = mock.run(program);
Common Pitfalls
❌ Don't: Forget that programs are immutable
// Wrong - trying to "modify" a program
Free<OpticOpKind.Witness, Person> program = OpticPrograms.get(person, PersonLenses.age());
program.flatMap(age -> ...); // This returns a NEW program!
// The original program is unchanged
✅ Do: Assign the result of flatMap
// Correct - capture the new program
Free<OpticOpKind.Witness, Person> program =
OpticPrograms.get(person, PersonLenses.age())
.flatMap(age -> OpticPrograms.modify(person, PersonLenses.age(), a -> a + 1));
❌ Don't: Mix side effects in program construction
// Wrong - side effect during construction
Free<OpticOpKind.Witness, Person> program =
OpticPrograms.get(person, PersonLenses.age())
.flatMap(age -> {
System.out.println("Age: " + age); // Side effect!
return OpticPrograms.pure(person);
});
✅ Do: Keep program construction pure
// Correct - side effects only in interpreters
Free<OpticOpKind.Witness, Person> program =
OpticPrograms.get(person, PersonLenses.age())
.flatMap(age -> OpticPrograms.pure(person));
// Side effects happen during interpretation
LoggingOpticInterpreter logger = OpticInterpreters.logging();
Person result = logger.run(program);
logger.getLog().forEach(System.out::println); // Side effect here is fine
Further Reading
- Free Monads Explained: Why Free Monads Matter by Gabriel Gonzalez
- Interpreter Pattern: Design Patterns: Elements of Reusable Object-Oriented Software
- Tagless Final vs Free: Typed Tagless Final Interpreters
- Railway-Oriented Programming: Railway Oriented Programming by Scott Wlaschin
- Separation of Concerns: On the Criteria To Be Used in Decomposing Systems into Modules by David Parnas
Next Steps:
- Optic Interpreters - Deep dive into execution strategies
- Fluent API for Optics - Direct execution patterns
- Advanced Patterns - Complex real-world scenarios
Optic Interpreters: Multiple Execution Strategies
What you'll learn:
- How the Interpreter pattern separates description from execution
- The three built-in interpreters: Direct, Logging, and Validation
- When to use each interpreter effectively
- How to create custom interpreters for specific needs
- Combining interpreters for powerful workflows
- Real-world applications: audit trails, testing, and optimisation
Introduction: The Power of Interpretation
In the Free Monad DSL guide, we learnt how to build optic operations as programs: data structures that describe what to do, rather than doing it immediately. But a description alone is useless without execution. That's where interpreters come in.
An interpreter takes a program and executes it in a specific way. By providing different interpreters, you can run the same program with completely different behaviours:
- DirectOpticInterpreter: Executes operations immediately (production use)
- LoggingOpticInterpreter: Records every operation for audit trails
- ValidationOpticInterpreter: Checks constraints without modifying data
- Custom interpreters: Performance profiling, testing, mocking, and more
This separation of concerns—what to do vs how to do it—is the essence of the Interpreter pattern and the key to the Free monad's flexibility.
Write your business logic once as a program. Execute it in multiple ways: validate it in tests, log it in production, mock it during development, and optimise it for performance, all without changing the business logic itself.
Part 1: The Interpreter Pattern Explained
From Design Patterns to Functional Programming
The Interpreter pattern, described in the Gang of Four's Design Patterns, suggests representing operations as objects in an abstract syntax tree (AST), then traversing that tree to execute them. The Free monad is essentially a functional programming implementation of this pattern.
// Our "AST" - a program built from operations
Free<OpticOpKind.Witness, Person> program =
OpticPrograms.get(person, PersonLenses.age())
.flatMap(age ->
OpticPrograms.modify(person, PersonLenses.age(), a -> a + 1)
);
// Our "interpreter" - executes the AST
DirectOpticInterpreter interpreter = OpticInterpreters.direct();
Person result = interpreter.run(program);
Why Multiple Interpreters?
Different situations require different execution strategies:
| Situation | Interpreter | Why |
|---|---|---|
| Production execution | Direct | Fast, straightforward |
| Compliance & auditing | Logging | Records every change |
| Pre-flight checks | Validation | Verifies without executing |
| Unit testing | Mock/Custom | No real data needed |
| Performance tuning | Profiling/Custom | Measures execution time |
| Dry-run simulations | Validation | See what would happen |
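The same idea can be reduced to a few lines of plain Java. The sketch below is illustrative only (it uses none of the higher-kinded-j types): the program is data, and each interpreter walks that data differently while producing the same result.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// A "program" is just a list of described operations
record ModifyOp(String field, UnaryOperator<Integer> f) {}

final class MiniInterpreters {
    // Direct: run each operation, return the final value
    static int direct(List<ModifyOp> program, int value) {
        for (ModifyOp op : program) value = op.f().apply(value);
        return value;
    }

    // Logging: same result, but record what happened along the way
    static int logging(List<ModifyOp> program, int value, List<String> log) {
        for (ModifyOp op : program) {
            int next = op.f().apply(value);
            log.add("MODIFY: " + op.field() + " from " + value + " to " + next);
            value = next;
        }
        return value;
    }
}
```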
Part 2: The Direct Interpreter
The DirectOpticInterpreter is the simplest interpreter—it executes optic operations immediately, exactly as you'd expect.
Basic Usage
@GenerateLenses
public record Person(String name, int age) {}
Person person = new Person("Alice", 25);
// Build a program
Free<OpticOpKind.Witness, Person> program =
OpticPrograms.modify(person, PersonLenses.age(), age -> age + 1);
// Execute with direct interpreter
DirectOpticInterpreter interpreter = OpticInterpreters.direct();
Person result = interpreter.run(program);
System.out.println(result); // Person("Alice", 26)
When to Use
✅ Production execution: When you just want to run the operations
✅ Simple workflows: When audit trails or validation aren't needed
✅ Performance-critical paths: Minimal overhead
Characteristics
- Fast: No additional processing
- Simple: Executes exactly as described
- No Side Effects: Pure optic operations only
Example: direct execution in a production workflow:
@GenerateLenses
record Employee(String name, int salary, String status) {}
enum PerformanceRating { EXCELLENT, GOOD, SATISFACTORY, POOR }
// Employee management system
public Employee processAnnualReview(
Employee employee,
PerformanceRating rating
) {
Free<OpticOpKind.Witness, Employee> program =
buildReviewProgram(employee, rating);
// Direct execution in production
return OpticInterpreters.direct().run(program);
}
Part 3: The Logging Interpreter
The LoggingOpticInterpreter executes operations whilst recording detailed logs of every operation performed. This is invaluable for:
- Audit trails: Compliance requirements (GDPR, SOX, etc.)
- Debugging: Understanding what happened when
- Monitoring: Tracking data changes in production
Basic Usage
@GenerateLenses
public record Account(String accountId, BigDecimal balance) {}
Account account = new Account("ACC001", new BigDecimal("1000.00"));
// Build a program
Free<OpticOpKind.Witness, Account> program =
OpticPrograms.modify(
account,
AccountLenses.balance(),
balance -> balance.subtract(new BigDecimal("100.00"))
);
// Execute with logging
LoggingOpticInterpreter logger = OpticInterpreters.logging();
Account result = logger.run(program);
// Review the log
List<String> log = logger.getLog();
log.forEach(System.out::println);
/* Output:
MODIFY: AccountLenses.balance() from 1000.00 to 900.00
*/
Comprehensive Example: Financial Transaction Audit
@GenerateLenses
public record Transaction(
String txnId,
Account from,
Account to,
BigDecimal amount,
LocalDateTime timestamp
) {}
// Build a transfer programme
Free<OpticOpKind.Witness, Transaction> transferProgram(Transaction txn) {
return OpticPrograms.get(txn, TransactionLenses.amount())
.flatMap(amount ->
// Debit source account
OpticPrograms.modify(
txn,
TransactionLenses.from().andThen(AccountLenses.balance()),
balance -> balance.subtract(amount)
)
)
.flatMap(debited ->
// Credit destination account
OpticPrograms.modify(
debited,
TransactionLenses.to().andThen(AccountLenses.balance()),
balance -> balance.add(debited.amount())
)
);
}
// Execute with audit logging
Transaction txn = new Transaction(
"TXN-12345",
new Account("ACC001", new BigDecimal("1000.00")),
new Account("ACC002", new BigDecimal("500.00")),
new BigDecimal("250.00"),
LocalDateTime.now()
);
LoggingOpticInterpreter logger = OpticInterpreters.logging();
Transaction result = logger.run(transferProgram(txn));
// Persist audit trail to database
logger.getLog().forEach(entry -> auditService.record(txn.txnId(), entry));
Log Format
The logging interpreter provides detailed, human-readable logs:
GET: TransactionLenses.amount() -> 250.00
MODIFY: TransactionLenses.from().andThen(AccountLenses.balance()) from 1000.00 to 750.00
MODIFY: TransactionLenses.to().andThen(AccountLenses.balance()) from 500.00 to 750.00
Managing Logs
LoggingOpticInterpreter logger = OpticInterpreters.logging();
// Run first programme
logger.run(program1);
List<String> firstLog = logger.getLog();
// Clear for next programme
logger.clearLog();
// Run second programme
logger.run(program2);
List<String> secondLog = logger.getLog();
The logging interpreter does add overhead (string formatting, list management). For high-frequency operations, consider:
- Using sampling (log every Nth transaction)
- Async logging (log to queue, process later)
- Conditional logging (only for high-value transactions)
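The sampling idea can be sketched in a few lines of plain Java. The `SamplingLog` class below is a hypothetical illustration, not part of the library: it keeps only every Nth entry instead of all of them.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: record only every Nth entry to reduce logging overhead.
final class SamplingLog {
    private final int sampleEvery;
    private final List<String> entries = new ArrayList<>();
    private long counter = 0;

    SamplingLog(int sampleEvery) {
        this.sampleEvery = sampleEvery;
    }

    // Keeps the entry only when the running counter hits the sampling interval.
    void record(String entry) {
        if (++counter % sampleEvery == 0) {
            entries.add(entry);
        }
    }

    List<String> entries() {
        return entries;
    }
}
```

With `sampleEvery = 10`, recording 100 transactions retains only 10 log entries, trading audit completeness for throughput.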
Part 4: The Validation Interpreter
The ValidationOpticInterpreter performs a "dry-run" of your programme, checking constraints and collecting errors/warnings without actually executing the operations. This is perfect for:
- Pre-flight checks: Validate before committing
- Testing: Verify logic without side effects
- What-if scenarios: See what would happen
Basic Usage
@GenerateLenses
public record Person(String name, int age) {}
Person person = new Person("Alice", 25);
// Build a programme
Free<OpticOpKind.Witness, Person> program =
OpticPrograms.set(person, PersonLenses.name(), null); // Oops!
// Validate without executing
ValidationOpticInterpreter validator = OpticInterpreters.validating();
ValidationOpticInterpreter.ValidationResult result = validator.validate(program);
if (!result.isValid()) {
// Has errors
result.errors().forEach(System.err::println);
}
if (result.hasWarnings()) {
// Has warnings
result.warnings().forEach(System.out::println);
// Output: "SET operation with null value: PersonLenses.name()"
}
Validation Rules
The validation interpreter checks for:
- Null values: Warns when setting null
- Modifier failures: Errors when modifiers throw exceptions
- Custom constraints: (via custom interpreter subclass)
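The first two rules can be sketched in plain Java. The `DryRunChecks` class below is a hypothetical illustration of the classification logic, not the library's implementation: a `SET` with `null` yields a warning, and a modifier that throws yields an error, without committing any result.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch of the dry-run checks: nothing is actually executed.
final class DryRunChecks {
    final List<String> errors = new ArrayList<>();
    final List<String> warnings = new ArrayList<>();

    // Setting null is suspicious but not necessarily fatal: record a warning.
    void checkSet(String opticName, Object newValue) {
        if (newValue == null) {
            warnings.add("SET operation with null value: " + opticName);
        }
    }

    // Probe the modifier against a sample value; a thrown exception is an error.
    <A> void checkModify(String opticName, UnaryOperator<A> modifier, A sample) {
        try {
            modifier.apply(sample); // result is discarded; this is only a probe
        } catch (RuntimeException e) {
            errors.add("MODIFY failed for " + opticName + ": " + e.getMessage());
        }
    }
}
```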
Real-World Example: Data Migration Validation
@GenerateLenses
public record UserV1(String username, String email, Integer age) {}
@GenerateLenses
public record UserV2(
String username,
String email,
int age, // Now non-null!
boolean verified
) {}
// Migration programme
Free<OpticOpKind.Witness, UserV2> migrateUser(UserV1 oldUser) {
return OpticPrograms.get(oldUser, UserV1Lenses.age())
.flatMap(age -> {
if (age == null) {
// This would fail!
throw new IllegalArgumentException("Age cannot be null in V2");
}
UserV2 newUser = new UserV2(
oldUser.username(),
oldUser.email(),
age,
false
);
return OpticPrograms.pure(newUser);
});
}
// Validate migration for each user
List<UserV1> oldUsers = loadOldUsers();
List<ValidationResult> validations = new ArrayList<>();
for (UserV1 user : oldUsers) {
Free<OpticOpKind.Witness, UserV2> program = migrateUser(user);
ValidationOpticInterpreter validator = OpticInterpreters.validating();
ValidationResult validation = validator.validate(program);
validations.add(validation);
if (!validation.isValid()) {
System.err.println("User " + user.username() + " failed validation:");
validation.errors().forEach(System.err::println);
}
}
// Only proceed if all valid
if (validations.stream().allMatch(ValidationResult::isValid)) {
// Execute migrations with direct interpreter
oldUsers.forEach(user -> {
Free<OpticOpKind.Witness, UserV2> program = migrateUser(user);
UserV2 migrated = OpticInterpreters.direct().run(program);
saveNewUser(migrated);
});
}
Validation Result API
// Simple exception for validation failures
class ValidationException extends RuntimeException {
public ValidationException(String message) {
super(message);
}
public ValidationException(List<String> errors) {
super("Validation failed: " + String.join(", ", errors));
}
}
// Simple exception for business logic failures
class BusinessException extends RuntimeException {
public BusinessException(String message, Throwable cause) {
super(message, cause);
}
}
public record ValidationResult(
List<String> errors, // Blocking issues
List<String> warnings // Non-blocking concerns
) {
public boolean isValid() {
return errors.isEmpty();
}
public boolean hasWarnings() {
return !warnings.isEmpty();
}
}
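Because `ValidationResult` is a plain record, its semantics are easy to confirm directly: errors block execution, warnings do not. The demo class below simply mirrors the record shown above.

```java
import java.util.List;

// Mirror of the ValidationResult record shown above, demonstrating its
// behaviour: errors make a result invalid, warnings alone do not.
public class ValidationResultDemo {
    public record ValidationResult(List<String> errors, List<String> warnings) {
        public boolean isValid() { return errors.isEmpty(); }
        public boolean hasWarnings() { return !warnings.isEmpty(); }
    }

    public static void main(String[] args) {
        var warnedOnly = new ValidationResult(
            List.of(), List.of("SET operation with null value: PersonLenses.name()"));
        System.out.println(warnedOnly.isValid());     // true: warnings don't block
        System.out.println(warnedOnly.hasWarnings()); // true
    }
}
```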
Use the validation interpreter in unit tests to verify programme structure without executing operations:
@Test
void testProgrammeLogic() {
Free<OpticOpKind.Witness, Person> program =
buildComplexProgram(testData);
ValidationOpticInterpreter validator = OpticInterpreters.validating();
ValidationResult result = validator.validate(program);
// Verify no errors in logic
assertTrue(result.isValid());
}
Part 5: Creating Custom Interpreters
You can create custom interpreters for specific needs: performance profiling, mocking, optimisation, or any other execution strategy.
The Interpreter Interface
All interpreters implement a natural transformation from OpticOp to some effect type (usually Id for simplicity):
public interface OpticInterpreter {
<A> A run(Free<OpticOpKind.Witness, A> program);
}
Example 1: Performance Profiling Interpreter
public final class ProfilingOpticInterpreter {
private final Map<String, Long> executionTimes = new HashMap<>();
private final Map<String, Integer> executionCounts = new HashMap<>();
public <A> A run(Free<OpticOpKind.Witness, A> program) {
Function<Kind<OpticOpKind.Witness, ?>, Kind<IdKind.Witness, ?>> transform =
kind -> {
OpticOp<?, ?> op = OpticOpKindHelper.OP.narrow(
(Kind<OpticOpKind.Witness, Object>) kind
);
String opName = getOperationName(op);
long startTime = System.nanoTime();
// Execute the operation
Object result = executeOperation(op);
long endTime = System.nanoTime();
long duration = endTime - startTime;
// Record metrics
executionTimes.merge(opName, duration, Long::sum);
executionCounts.merge(opName, 1, Integer::sum);
return Id.of(result);
};
Kind<IdKind.Witness, A> resultKind =
program.foldMap(transform, IdMonad.instance());
return IdKindHelper.ID.narrow(resultKind).value();
}
public Map<String, Long> getAverageExecutionTimes() {
Map<String, Long> averages = new HashMap<>();
executionTimes.forEach((op, totalTime) -> {
int count = executionCounts.get(op);
averages.put(op, totalTime / count);
});
return averages;
}
private String getOperationName(OpticOp<?, ?> op) {
return switch (op) {
case OpticOp.Get<?, ?> get -> "GET: " + get.optic().getClass().getSimpleName();
case OpticOp.Set<?, ?> set -> "SET: " + set.optic().getClass().getSimpleName();
case OpticOp.Modify<?, ?> mod -> "MODIFY: " + mod.optic().getClass().getSimpleName();
// ... other cases
default -> "UNKNOWN";
};
}
private Object executeOperation(OpticOp<?, ?> op) {
// Execute using direct interpretation logic
return switch (op) {
case OpticOp.Get<?, ?> get -> get.optic().get(get.source());
case OpticOp.Set<?, ?> set -> set.optic().set(set.newValue(), set.source());
case OpticOp.Modify<?, ?> mod -> {
var current = mod.optic().get(mod.source());
var updated = mod.modifier().apply(current);
yield mod.optic().set(updated, mod.source());
}
// ... other cases
};
}
}
Usage:
Free<OpticOpKind.Witness, Team> program = buildComplexTeamUpdate(team);
ProfilingOpticInterpreter profiler = new ProfilingOpticInterpreter();
Team result = profiler.run(program);
// Analyse performance
Map<String, Long> avgTimes = profiler.getAverageExecutionTimes();
avgTimes.forEach((op, time) ->
System.out.println(op + ": " + time + "ns average")
);
Example 2: Mock Interpreter for Testing
public final class MockOpticInterpreter<S> {
private final S mockData;
public MockOpticInterpreter(S mockData) {
this.mockData = mockData;
}
@SuppressWarnings("unchecked")
public <A> A run(Free<OpticOpKind.Witness, A> program) {
Function<Kind<OpticOpKind.Witness, ?>, Kind<IdKind.Witness, ?>> transform =
kind -> {
OpticOp<?, ?> op = OpticOpKindHelper.OP.narrow(
(Kind<OpticOpKind.Witness, Object>) kind
);
// All operations just return mock data
Object result = switch (op) {
case OpticOp.Get<?, ?> ignored -> mockData;
case OpticOp.Set<?, ?> ignored -> mockData;
case OpticOp.Modify<?, ?> ignored -> mockData;
case OpticOp.GetAll<?, ?> ignored -> List.of(mockData);
case OpticOp.Preview<?, ?> ignored -> Optional.of(mockData);
default -> throw new UnsupportedOperationException(
"Unsupported operation: " + op.getClass().getSimpleName()
);
};
return Id.of(result);
};
Kind<IdKind.Witness, A> resultKind =
program.foldMap(transform, IdMonad.instance());
return IdKindHelper.ID.narrow(resultKind).value();
}
}
Usage in tests:
@Test
void testBusinessLogic() {
// Create mock data
Person mockPerson = new Person("MockUser", 99);
// Build programme (business logic)
Free<OpticOpKind.Witness, Person> program =
buildComplexBusinessLogic(mockPerson);
// Execute with mock interpreter (no real data needed!)
MockOpticInterpreter<Person> mock = new MockOpticInterpreter<>(mockPerson);
Person result = mock.run(program);
// Verify result
assertEquals("MockUser", result.name());
}
Part 6: Combining Interpreters
You can run the same programme through multiple interpreters for powerful workflows:
Pattern 1: Validate-Then-Execute
Free<OpticOpKind.Witness, Order> orderProcessing = buildOrderProgramme(order);
// Step 1: Validate
ValidationOpticInterpreter validator = OpticInterpreters.validating();
ValidationResult validation = validator.validate(orderProcessing);
if (!validation.isValid()) {
validation.errors().forEach(System.err::println);
throw new ValidationException("Order processing failed validation");
}
// Step 2: Execute with logging
LoggingOpticInterpreter logger = OpticInterpreters.logging();
Order result = logger.run(orderProcessing);
// Step 3: Persist audit trail
logger.getLog().forEach(entry -> auditRepository.save(order.id(), entry));
Pattern 2: Profile-Optimise-Execute
Free<OpticOpKind.Witness, Dataset> dataProcessing = buildDataPipeline(dataset);
// Step 1: Profile to find bottlenecks
ProfilingOpticInterpreter profiler = new ProfilingOpticInterpreter();
profiler.run(dataProcessing);
Map<String, Long> times = profiler.getAverageExecutionTimes();
String slowest = times.entrySet().stream()
.max(Map.Entry.comparingByValue())
.map(Map.Entry::getKey)
.orElse("none");
System.out.println("Slowest operation: " + slowest);
// Step 2: Optimise programme based on profiling
Free<OpticOpKind.Witness, Dataset> optimised = optimiseProgramme(
dataProcessing,
slowest
);
// Step 3: Execute optimised programme
Dataset result = OpticInterpreters.direct().run(optimised);
Pattern 3: Test-Validate-Execute Pipeline
// Development: Mock interpreter
MockOpticInterpreter<Order> mockInterp = new MockOpticInterpreter<>(mockOrder);
Order mockResult = mockInterp.run(programme);
assert mockResult.status() == OrderStatus.COMPLETED;
// Staging: Validation interpreter
ValidationResult validation = OpticInterpreters.validating().validate(programme);
assert validation.isValid();
// Production: Logging interpreter
LoggingOpticInterpreter logger = OpticInterpreters.logging();
Order prodResult = logger.run(programme);
logger.getLog().forEach(auditService::record);
Part 7: Best Practices
Choose the Right Interpreter
| Use Case | Interpreter | Reason |
|---|---|---|
| Production CRUD | Direct | Fast, simple |
| Financial transactions | Logging | Audit trail |
| Data migration | Validation | Safety checks |
| Unit tests | Mock/Custom | No dependencies |
| Performance tuning | Profiling | Measure impact |
| Compliance | Logging | Regulatory requirements |
Interpreter Lifecycle
// ✅ Good: Reuse interpreter for multiple programmes
LoggingOpticInterpreter logger = OpticInterpreters.logging();
for (Transaction txn : transactions) {
Free<OpticOpKind.Witness, Transaction> program = buildTransfer(txn);
Transaction result = logger.run(program);
// Log accumulates across programmes
}
List<String> fullAuditTrail = logger.getLog();
// ❌ Bad: Creating new interpreter each time loses history
for (Transaction txn : transactions) {
LoggingOpticInterpreter logger = OpticInterpreters.logging(); // New each time!
Transaction result = logger.run(buildTransfer(txn));
// Can only see this programme's log
}
Error Handling
Free<OpticOpKind.Witness, Order> program = buildOrderProcessing(order);
// Wrap interpreter execution in try-catch
try {
// Validate first
ValidationResult validation = OpticInterpreters.validating().validate(program);
if (!validation.isValid()) {
throw new ValidationException(validation.errors());
}
// Execute with logging
LoggingOpticInterpreter logger = OpticInterpreters.logging();
Order result = logger.run(program);
// Success - persist log
auditRepository.saveAll(logger.getLog());
return result;
} catch (ValidationException e) {
  // Handle validation errors. Note: `log` here is an application logger
  // (e.g. SLF4J), distinct from the LoggingOpticInterpreter above.
  log.error("Validation failed", e);
  throw new BusinessException("Order processing failed validation", e);
} catch (Exception e) {
  // Handle execution errors
  log.error("Execution failed", e);
  throw new BusinessException("Order processing failed", e);
}
Further Reading
- Interpreter Pattern: Design Patterns: Elements of Reusable Object-Oriented Software - Gang of Four
- Natural Transformations: Category Theory for Programmers by Bartosz Milewski
- Free Monad Interpreters: Why free monads matter by Gabriel Gonzalez
- Aspect-Oriented Programming: AspectJ in Action by Ramnivas Laddad
- Cross-Cutting Concerns: On the Criteria To Be Used in Decomposing Systems into Modules by David Parnas
Next Steps:
- Free Monad DSL for Optics - Building composable programmes
- Fluent API for Optics - Direct execution patterns
- Practical Examples - Real-world applications
Optics - Basic Usage Examples
- Practical application patterns for optics across diverse problem domains
- Building configuration processors, data validators, and API adapters with optics
- Creating reusable optic libraries tailored to your specific business needs
- Performance Optimisation techniques and benchmarking for complex optic compositions
- Testing strategies for optic-based data processing pipelines
- Decision frameworks for choosing the right optic combinations for real-world scenarios
- Common anti-patterns to avoid and best practices for maintainable optic code
This document provides a brief summary of the example classes found in the org.higherkindedj.example.optics package in the HKJ-Examples.
These examples showcase how to use the code generation features (@GenerateLenses, @GeneratePrisms, @GenerateTraversals) and the resulting optics to work with immutable data structures in a clean and powerful way.
LensUsageExample.java
This example is the primary introduction to Lenses. It demonstrates how to automatically generate Lens optics for immutable records and then compose them to read and update deeply nested fields.
- Key Concept: A `Lens` provides a focus on a single field within a product type (like a record or class).
- Demonstrates:
  - Defining a nested data model (`League`, `Team`, `Player`).
  - Using `@GenerateLenses` on records to trigger code generation.
  - Accessing generated Lenses (e.g., `LeagueLenses.teams()`).
  - Composing Lenses with `andThen()` to create a path to a deeply nested field.
  - Using `get()` to read a value and `set()` to perform an immutable update.
// Composing lenses to focus from League -> Team -> name
Lens<League, String> leagueToTeamName = LeagueLenses.teams().andThen(TeamLenses.name());
// Use the composed lens to get and set a value
String teamName = leagueToTeamName.get(league);
League updatedLeague = leagueToTeamName.set("New Team Name").apply(league);
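The mechanics behind that composition can be sketched with a minimal, self-contained lens. `MiniLens` below is a hypothetical stand-in for the library's `Lens`, reduced to a getter plus an immutable setter, and the `Team`/`League` records are illustrative only:

```java
import java.util.function.BiFunction;
import java.util.function.Function;

// Minimal lens sketch: a getter plus an immutable (copy-on-write) setter.
record MiniLens<S, A>(Function<S, A> getter, BiFunction<S, A, S> setter) {
    A get(S source) { return getter.apply(source); }
    S set(S source, A value) { return setter.apply(source, value); }

    // Composition: focus through this lens, then through the next one.
    <B> MiniLens<S, B> andThen(MiniLens<A, B> next) {
        return new MiniLens<>(
            s -> next.get(get(s)),
            (s, b) -> set(s, next.set(get(s), b)));
    }
}

record Team(String name, int wins) {}
record League(Team champion) {}

class LensDemo {
    static final MiniLens<League, Team> CHAMPION =
        new MiniLens<>(League::champion, (l, t) -> new League(t));
    static final MiniLens<Team, String> NAME =
        new MiniLens<>(Team::name, (t, n) -> new Team(n, t.wins()));
    // League -> Team -> name, as a single composed optic.
    static final MiniLens<League, String> CHAMPION_NAME = CHAMPION.andThen(NAME);
}
```

Setting through the composed lens rebuilds only the path that changed and leaves the original structure untouched.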
PrismUsageExample.java
This example introduces Prisms. It shows how to generate optics for a sealed interface (a sum type) and use the resulting Prism to focus on a specific implementation of that interface.
- Key Concept: A `Prism` provides a focus on a specific case within a sum type (like a sealed interface or enum). It succeeds if the object is an instance of that case.
- Demonstrates:
  - Defining a `sealed interface` (`Shape`) with different implementations (`Rectangle`, `Circle`).
  - Using `@GeneratePrisms` on the sealed interface.
  - Using the generated `Prism` to safely "get" an instance of a specific subtype.
  - Using `modify()` to apply a function only if the object is of the target type.
// Get the generated prism for the Rectangle case
Prism<Shape, Rectangle> rectanglePrism = ShapePrisms.rectangle();
// Safely attempt to modify a shape, which only works if it's a Rectangle
Optional<Shape> maybeUpdated = rectanglePrism.modify(r -> new Rectangle(r.width() + 10, r.height()))
.apply(new Rectangle(5, 10)); // Returns Optional[Rectangle[width=15, height=10]]
Optional<Shape> maybeNotUpdated = rectanglePrism.modify(...)
.apply(new Circle(20.0)); // Returns Optional.empty
TraversalUsageExample.java
This example showcases the power of composing Traversals and Lenses to perform bulk updates on items within nested collections.
- Key Concept: A `Traversal` provides a focus on zero or more elements, such as all items in a `List` or all values in a `Map`.
- Demonstrates:
  - Using `@GenerateTraversals` to create optics for fields that are collections (`List<Team>`, `List<Player>`).
  - Composing a `Traversal` with another `Traversal` and a `Lens` to create a single optic that focuses on a field within every element of a nested collection.
  - Using `modifyF()` with the `Id` monad to perform a pure, bulk update (e.g., adding bonus points to every player's score).
// Compose a path from League -> each Team -> each Player -> score
Traversal<League, Integer> leagueToAllPlayerScores =
LeagueTraversals.teams()
.andThen(TeamTraversals.players())
.andThen(PlayerLenses.score());
// Use the composed traversal to add 5 to every player's score
var updatedLeague = IdKindHelper.ID.narrow(
leagueToAllPlayerScores.modifyF(
score -> Id.of(score + 5), league, IdMonad.instance()
)
).value();
PartsOfTraversalExample.java
This example demonstrates the partsOf combinator for list-level manipulation of traversal focuses. It shows how to convert a Traversal into a Lens on a List, enabling powerful operations like sorting, reversing, and deduplicating focused elements whilst maintaining structure integrity.
- Key Concept: `partsOf` bridges element-wise traversal operations and collection-level algorithms by treating all focuses as a single list.
- Demonstrates:
  - Converting a `Traversal<S, A>` into a `Lens<S, List<A>>` with `Traversals.partsOf()`.
  - Extracting all focused elements as a list for group-level operations.
  - Using convenience methods: `Traversals.sorted()`, `Traversals.reversed()`, `Traversals.distinct()`.
  - Custom comparator sorting (case-insensitive, by length, reverse order).
  - Combining `partsOf` with filtered traversals for selective list operations.
  - Understanding size mismatch behaviour (graceful degradation).
  - Real-world use case: normalising prices across an e-commerce catalogue.
// Convert traversal to lens on list of all prices
Traversal<Catalogue, Double> allPrices = CatalogueTraversals.categories()
.andThen(CategoryTraversals.products())
.andThen(ProductLenses.price().asTraversal());
Lens<Catalogue, List<Double>> pricesLens = Traversals.partsOf(allPrices);
// Sort all prices across the entire catalogue
Catalogue sortedCatalogue = Traversals.sorted(allPrices, catalogue);
// Reverse prices (highest to lowest)
Catalogue reversedCatalogue = Traversals.reversed(allPrices, sortedCatalogue);
// Remove duplicate product names
List<Product> deduplicatedProducts = Traversals.distinct(nameTraversal, products);
// Sort only in-stock product prices (combining with filtered traversals)
Traversal<List<Product>, Double> inStockPrices = Traversals.<Product>forList()
.filtered(p -> p.stockLevel() > 0)
.andThen(ProductLenses.price().asTraversal());
List<Product> result = Traversals.sorted(inStockPrices, products);
FoldUsageExample.java
This example demonstrates Folds for read-only querying and data extraction from complex structures.
- Key Concept: A `Fold` is a read-only optic that focuses on zero or more elements, perfect for queries, searches, and aggregations without modification.
- Demonstrates:
  - Using `@GenerateFolds` to create query optics automatically.
  - Using `getAll()`, `preview()`, `find()`, `exists()`, `all()`, `isEmpty()`, and `length()` operations for querying data.
  - Composing folds for deep queries across nested structures.
  - Using standard monoids from the `Monoids` utility class (`Monoids.doubleAddition()`, `Monoids.booleanAnd()`, `Monoids.booleanOr()`).
  - Using `foldMap` with monoids for custom aggregations (sum, product, boolean operations).
  - Contrasting Fold (read-only) with Traversal (read-write) to express intent clearly.
// Get all products from an order
Fold<Order, ProductItem> items = OrderFolds.items();
List<ProductItem> allProducts = items.getAll(order);
// Check if any product is out of stock
boolean hasOutOfStock = items.exists(p -> !p.inStock(), order);
// Calculate total price using standard monoid from Monoids utility class
Monoid<Double> sumMonoid = Monoids.doubleAddition();
double total = items.foldMap(sumMonoid, ProductItem::price, order);
// Use boolean monoids for condition checking
Monoid<Boolean> andMonoid = Monoids.booleanAnd();
boolean allAffordable = items.foldMap(andMonoid, p -> p.price() < 1000, order);
// Compose folds for deep queries
Fold<OrderHistory, ProductItem> allProductsInHistory =
OrderHistoryFolds.orders().andThen(OrderFolds.items());
List<ProductItem> allProds = allProductsInHistory.getAll(history);
ValidatedTraversalExample.java
This example demonstrates a more advanced use case for Traversals where the goal is to validate multiple fields on a single object and accumulate all errors.
- Key Concept: A `Traversal` can focus on multiple fields of the same type within a single object.
- Demonstrates:
  - Defining a `RegistrationForm` with several `String` fields.
  - Using `@GenerateTraversals` with a custom `name` parameter to create a single `Traversal` that groups multiple fields (`name`, `email`, `password`).
  - Using this traversal with `Validated` to run a validation function on each field.
  - Because `Validated` has an `Applicative` that accumulates errors, the end result is a `Validated` object containing either the original form or a list of all validation failures.
OpticProfunctorExample.java
This comprehensive example demonstrates the profunctor capabilities of optics, showing how to adapt existing optics to work with different data types and structures.
- Key Concept: Every optic is a profunctor, meaning it can be adapted using `contramap`, `map`, and `dimap` operations to work with different source and target types.
- Demonstrates:
  - Contramap-style adaptation: Using an existing `Person` lens with `Employee` objects by providing a conversion function.
  - Map-style adaptation: Transforming the target type of a lens (e.g., `LocalDate` to formatted `String`).
  - Dimap-style adaptation: Converting between completely different data representations (e.g., internal models vs external DTOs).
  - API Integration: Creating adapters for external API formats whilst reusing internal optics.
  - Type-safe wrappers: Working with strongly-typed wrapper classes efficiently.

// Adapt a Person lens to work with Employee objects
Lens<Person, String> firstNameLens = PersonLenses.firstName();
Lens<Employee, String> employeeFirstNameLens =
    firstNameLens.contramap(employee -> employee.personalInfo());
// Adapt a lens to work with different target types
Lens<Person, LocalDate> birthDateLens = PersonLenses.birthDate();
Lens<Person, String> birthDateStringLens =
    birthDateLens.map(date -> date.format(DateTimeFormatter.ISO_LOCAL_DATE));
Traversal Examples
These examples focus on using generated traversals for specific collection and container types, often demonstrating "effectful" traversals where each operation can succeed or fail.
ListTraversalExample.java
- Demonstrates: Traversing a `List<String>` field.
- Scenario: A `Project` has a list of team members. The traversal is used with a `lookupUser` function that returns a `Validated` type. This allows validating every member in the list. If any lookup fails, the entire operation results in an `Invalid`.
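The error-accumulating shape of such an effectful traversal can be sketched without the library. The `AccumulatingValidation` class below is a hypothetical illustration of the `Validated`-style behaviour: every element is checked, and all failures are collected rather than stopping at the first.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch of error accumulation: validate every element, collect all failures.
class AccumulatingValidation {
    // Either the validated values (errors empty) or every error found.
    record Result<A>(List<A> values, List<String> errors) {
        boolean isValid() { return errors.isEmpty(); }
    }

    // `check` returns null for a valid item, or an error message otherwise.
    static <A> Result<A> traverse(List<A> items, Function<A, String> check) {
        List<A> values = new ArrayList<>();
        List<String> errors = new ArrayList<>();
        for (A item : items) {
            String error = check.apply(item);
            if (error == null) values.add(item);
            else errors.add(error);
        }
        return new Result<>(values, errors);
    }
}
```

This mirrors what the `Validated` applicative provides for free: one invalid member does not hide the others.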
ArrayTraversalExample.java
- Demonstrates: Traversing an `Integer[]` field.
- Scenario: A `Survey` has an array of answers. The traversal is used with a validation function to ensure every answer is within a valid range (1-5), accumulating errors with `Validated`.
SetTraversalExample.java
- Demonstrates: Traversing a `Set<String>` field.
- Scenario: A `UserGroup` has a set of member emails. The traversal validates that every email in the set has a valid format (contains "@").
MapValueTraversalExample.java
- Demonstrates: Traversing the values of a `Map<String, Boolean>` field.
- Scenario: A `FeatureToggles` record holds a map of flags. The traversal focuses on every `Boolean` value in the map, allowing for a bulk update to disable all features at once.
EitherTraversalExample.java
- Demonstrates: Traversing an `Either<String, Integer>` field.
- Scenario: A `Computation` can result in a success (`Right`) or failure (`Left`). The traversal shows that `modifyF` only affects the value if the `Either` is a `Right`, leaving a `Left` untouched.
MaybeTraversalExample.java
- Demonstrates: Traversing a `Maybe<String>` field.
- Scenario: A `Configuration` has an optional `proxyHost`. The traversal shows that an operation is only applied if the `Maybe` is a `Just`, leaving a `Nothing` untouched, which is analogous to the `Either` example.
OptionalTraversalExample.java
- Demonstrates: Traversing a `java.util.Optional<String>` field.
- Scenario: A `User` record has an optional `middleName`. The traversal is used to apply a function (like `toUpperCase`) to the middle name only if it is present. This shows how to work with standard Java types in a functional way.
TryTraversalExample.java
- Demonstrates: Traversing a `Try<Integer>` field.
- Scenario: A `NetworkRequest` record holds the result of an operation that could have thrown an exception, wrapped in a `Try`. The traversal allows modification of the value only if the `Try` is a `Success`, leaving a `Failure` (containing an exception) unchanged.
Auditing Complex Data with Optics
A Real-World Deep Dive: The Power of Optics
- Solving complex, real-world data processing challenges with optics
- Building conditional filtering and transformation pipelines
- Combining all four core optic types in a single, powerful composition
- Creating declarative, type-safe alternatives to nested loops and type casting
- Advanced patterns like safe decoding, profunctor adaptations, and audit trails
- When optic composition provides superior solutions to imperative approaches
In modern software, we often work with complex, nested data structures. Performing a seemingly simple task—like "find and decode all production database passwords"—can lead to messy, error-prone code with nested loops, if statements, and manual type casting.
This tutorial demonstrates how to solve a sophisticated, real-world problem elegantly using the full power of higher-kinded-j optics. We'll build a single, declarative, type-safe optic that performs a deep, conditional data transformation.
All the example code for this tutorial can be found in the `org.higherkindedj.example` package in the Config Audit example.
Other examples of using optics can be found in the Optics examples.
🎯 The Challenge: A Conditional Config Audit
Imagine you're responsible for auditing application configurations. Your task is:
Find every encrypted database password, but only for applications deployed to the Google Cloud Platform (`gcp`) that are running in the `live` environment. For each password found, decode it from Base64 into a raw `byte[]` for an audit service.
This single sentence implies several operations:
- Deep Traversal: Navigate from a top-level config object down into a list of settings.
- Filtering: Select only settings of a specific type (`EncryptedValue`).
- Conditional Logic: Apply this logic only if the top-level config meets specific criteria (`gcp` and `live`).
- Data Transformation: Decode the Base64 string into another type (`byte[]`).
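For contrast, here is a hedged imperative sketch of those four operations tangled together. The records below (`AppConfig`, `Setting`, `Encrypted`, and so on) are hypothetical simplifications, not the tutorial's actual model:

```java
import java.util.ArrayList;
import java.util.Base64;
import java.util.List;

// Hypothetical imperative version of the audit: nested loops, instanceof
// checks and manual decoding all mixed into one method.
class ImperativeAudit {
    sealed interface SettingValue permits Text, Encrypted {}
    record Text(String value) implements SettingValue {}
    record Encrypted(String base64) implements SettingValue {}
    record Setting(String key, SettingValue value) {}
    record AppConfig(String platform, String environment, List<Setting> settings) {}

    static List<byte[]> auditPasswords(List<AppConfig> configs) {
        List<byte[]> decoded = new ArrayList<>();
        for (AppConfig config : configs) {
            // Conditional logic: only GCP configs in the live environment qualify.
            if (!"gcp".equals(config.platform()) || !"live".equals(config.environment())) {
                continue;
            }
            for (Setting setting : config.settings()) {         // deep traversal
                if (setting.value() instanceof Encrypted enc) { // filtering
                    decoded.add(Base64.getDecoder().decode(enc.base64())); // transformation
                }
            }
        }
        return decoded;
    }
}
```

Every new criterion or nesting level grows this method; the optics version composes the same concerns as separate, reusable pieces.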
Doing this imperatively is a recipe for complexity. Let's build it with optics instead.
Think of This Problem Like...
- A treasure hunt with conditional maps: Only certain maps (GCP/Live configs) contain the treasures (encrypted passwords)
- A selective mining operation: Drill down only into the right geological formations (config types) to extract specific minerals (encrypted data)
- A security scanner with filters: Only scan certain types of systems (matching deployment criteria) for specific vulnerabilities (encrypted values)
- A data archaeology expedition: Excavate only specific sites (qualified configs) to uncover particular artifacts (encoded passwords)
🛠️ The Four Tools for the Job
Our solution will compose the four primary optic types, each solving a specific part of the problem.
1. Lens: The Magnifying Glass 🔎
A Lens provides focused access to a field within a product type (like a Java record). We'll use lenses to look inside our configuration objects.
- `AppConfigLenses.settings()`: Zooms from an `AppConfig` to its `List<Setting>`.
- `SettingLenses.value()`: Zooms from a `Setting` to its `SettingValue`.
2. Iso: The Universal Translator 🔄
An Iso (Isomorphism) defines a lossless, two-way conversion between two types. It's perfect for handling different representations of the same data.
- `DeploymentTarget <-> String`: We model our deployment target as a structured record but recognise it's isomorphic to a raw string like `"gcp|live"`. An `Iso` lets us switch between these representations.
- `String <-> byte[]`: Base64 is just an encoded representation of a byte array. An `Iso` is the perfect tool for handling this encoding and decoding.
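The `String <-> byte[]` pair can be sketched with `java.util.Base64` as two plain functions. `MiniIso` below is a hypothetical stand-in for the library's `Iso`, shown only to make the round-trip law concrete:

```java
import java.util.Base64;
import java.util.function.Function;

// Sketch of an iso as a plain pair of total, mutually inverse functions.
record MiniIso<A, B>(Function<A, B> get, Function<B, A> reverseGet) {}

class Base64IsoDemo {
    static final MiniIso<String, byte[]> BASE64 = new MiniIso<>(
        s -> Base64.getDecoder().decode(s),          // decode: String -> byte[]
        b -> Base64.getEncoder().encodeToString(b)); // encode: byte[] -> String
}
```

The defining property is that the two directions cancel out: decoding an encoded value (and vice versa) recovers the original exactly. Note that `Base64.getDecoder().decode` throws `IllegalArgumentException` on malformed input, so the "lossless" claim only holds for valid Base64 strings.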
3. Prism: The Safe Filter 🔬
A Prism provides focused access to a specific case within a sum type (like a sealed interface). It lets us safely attempt to "zoom in" on one variant, failing gracefully if the data is of a different kind.
- `SettingValuePrisms.encryptedValue()`: This is our key filter. It will look at a `SettingValue` and only succeed if it's the `EncryptedValue` variant.
4. Traversal: The Bulk Operator 🗺️
A Traversal lets us operate on zero or more targets within a larger structure. It's the ideal optic for working with collections.
- `AppConfigTraversals.settings()`: This generated optic gives us a single tool to go from an `AppConfig` to every `Setting` inside its list.
When to Use This Approach vs Alternatives
Use Optic Composition When:
- Complex conditional filtering - Multiple levels of filtering based on different criteria
- Reusable audit logic - The same audit pattern applies to different config types
- Type-safe data extraction - Ensuring compile-time safety for complex transformations
- Declarative data processing - Building self-documenting processing pipelines
// Perfect for reusable, conditional audit logic
Traversal<ServerConfig, byte[]> sensitiveDataAuditor =
    ServerConfigTraversals.environments()
        .andThen(EnvironmentPrisms.production().asTraversal())
        .andThen(EnvironmentTraversals.credentials())
        .andThen(CredentialPrisms.encrypted().asTraversal())
        .andThen(EncryptedCredentialIsos.base64ToBytes.asTraversal());
Use Stream Processing When:
- Simple filtering - Basic collection operations without complex nesting
- Performance critical paths - Minimal abstraction overhead needed
- Aggregation logic - Computing statistics or summaries
// Better with streams for simple collection processing
List<String> allConfigNames = configs.stream()
    .map(AppConfig::name)
    .filter(name -> name.startsWith("prod-"))
    .collect(toList());
Use Manual Iteration When:
- Early termination - You might want to stop processing on first match
- Complex business logic - Multiple conditions and branches that don't map cleanly
- Legacy integration - Working with existing imperative codebases
// Sometimes manual loops are clearest for complex logic
for (AppConfig config : configs) {
    if (shouldAudit(config) && hasEncryptedData(config)) {
        auditResults.add(performDetailedAudit(config));
        if (auditResults.size() >= MAX_AUDITS) break;
    }
}
Common Pitfalls
❌ Don't Do This:
// Over-engineering simple cases
Traversal<String, String> stringIdentity =
    Iso.of(s -> s, s -> s).asTraversal(); // Just use the string directly!
// Creating complex compositions inline
var passwords = AppConfigLenses.settings().asTraversal()
    .andThen(SettingLenses.value().asTraversal())
    .andThen(SettingValuePrisms.encryptedValue().asTraversal())
    // ... 10 more lines of composition
    .getAll(config); // Hard to understand and reuse
// Ignoring error handling in transformations
Iso<String, byte[]> unsafeBase64 = Iso.of(
    Base64.getDecoder()::decode, // Can throw IllegalArgumentException!
    Base64.getEncoder()::encodeToString
);
// Forgetting to test round-trip properties
// No verification that encode(decode(x)) == x
✅ Do This Instead:
// Use appropriate tools for simple cases
String configName = config.name(); // Direct access is fine
// Create well-named, reusable compositions
public static final Traversal<AppConfig, byte[]> GCP_LIVE_ENCRYPTED_PASSWORDS =
    gcpLiveOnlyPrism.asTraversal()
        .andThen(AppConfigTraversals.settings())
        .andThen(SettingLenses.value().asTraversal())
        .andThen(SettingValuePrisms.encryptedValue().asTraversal())
        .andThen(EncryptedValueLenses.base64Value().asTraversal())
        .andThen(EncryptedValueIsos.base64.asTraversal());
// Handle errors gracefully
Prism<String, byte[]> safeBase64Prism = Prism.of(
    str -> {
        try {
            return Optional.of(Base64.getDecoder().decode(str));
        } catch (IllegalArgumentException e) {
            return Optional.empty();
        }
    },
    bytes -> Base64.getEncoder().encodeToString(bytes)
);
// Test your compositions
@Test
public void testBase64RoundTrip() {
    String original = "test data";
    String encoded = Base64.getEncoder().encodeToString(original.getBytes());
    byte[] decoded = EncryptedValueIsos.base64.get(encoded);
    String roundTrip = new String(decoded);
    assertEquals(original, roundTrip);
}
Performance Notes
Optic compositions are optimised for complex data processing:
- Lazy evaluation: Complex filters only run when data actually matches
- Single-pass processing: Compositions traverse data structures only once
- Memory efficient: Only creates new objects for actual transformations
- JIT optimisation: hot optic chains can be inlined by the JVM's just-in-time compiler
- Structural sharing: Unchanged parts of data structures are reused
Best Practice: Profile your specific use case and compare with stream-based alternatives:
// For frequent auditing, create optics once and reuse
public class AuditPerformance {
private static final Traversal<AppConfig, byte[]> AUDIT_TRAVERSAL = createAuditTraversal();
@Benchmark
public List<byte[]> opticBasedAudit(List<AppConfig> configs) {
return configs.stream()
.flatMap(config -> Traversals.getAll(AUDIT_TRAVERSAL, config).stream())
.collect(toList());
}
@Benchmark
public List<byte[]> streamBasedAudit(List<AppConfig> configs) {
return configs.stream()
.filter(this::isGcpLive)
.flatMap(config -> config.settings().stream())
.map(Setting::value)
.filter(EncryptedValue.class::isInstance)
.map(EncryptedValue.class::cast)
.map(encrypted -> Base64.getDecoder().decode(encrypted.base64Value()))
.collect(toList());
}
}
✨ Composing the Solution
Here's how we chain these optics together. To create the most robust and general-purpose optic (a Traversal), we convert each part of our chain into a Traversal using .asTraversal() before composing it. This ensures type-safety and clarity throughout the process.
The final composed optic has the type Traversal<AppConfig, byte[]> and reads like a declarative path: AppConfig -> (Filter for GCP/Live) -> each Setting -> its Value -> (Filter for Encrypted) -> the inner String -> the raw bytes
// Inside ConfigAuditExample.java
// A. First, create a Prism to act as our top-level filter.
Prism<AppConfig, AppConfig> gcpLiveOnlyPrism = Prism.of(
config -> {
String rawTarget = DeploymentTarget.toRawString().get(config.target());
return "gcp|live".equals(rawTarget) ? Optional.of(config) : Optional.empty();
},
config -> config // The 'build' function is just identity
);
// B. Define the main traversal path to get to the data we want to audit.
Traversal<AppConfig, byte[]> auditTraversal =
AppConfigTraversals.settings() // Traversal<AppConfig, Setting>
.andThen(SettingLenses.value().asTraversal()) // Traversal<AppConfig, SettingValue>
.andThen(SettingValuePrisms.encryptedValue().asTraversal()) // Traversal<AppConfig, EncryptedValue>
.andThen(EncryptedValueLenses.base64Value().asTraversal()) // Traversal<AppConfig, String>
.andThen(EncryptedValueIsos.base64.asTraversal()); // Traversal<AppConfig, byte[]>
// C. Combine the filter and the main traversal into the final optic.
Traversal<AppConfig, byte[]> finalAuditor = gcpLiveOnlyPrism.asTraversal().andThen(auditTraversal);
// D. Using the final optic is now trivial.
// We call a static helper method from our Traversals utility class.
List<byte[]> passwords = Traversals.getAll(finalAuditor, someConfig);
When we call Traversals.getAll(finalAuditor, config), it performs the entire, complex operation and returns a simple List<byte[]> containing only the data we care about.
🚀 Why This is a Powerful Approach
- Declarative & Readable: The optic chain describes what data to get, not how to loop and check for it. The logic reads like a path, making it self-documenting.
- Composable & Reusable: Every optic, and every composition, is a reusable component. We could reuse `gcpLiveOnlyPrism` for other tasks, or swap out the final `base64` Iso to perform a different transformation.
- Type-Safe: The entire operation is checked by the Java compiler. It's impossible to, for example, try to decode a `StringValue` as if it were encrypted. A mismatch in the optic chain results in a compile-time error, not a runtime `ClassCastException`.
- Architectural Purity: By having all optics share a common abstract parent (`Optic`), the library provides universal, lawful composition while allowing for specialised, efficient implementations.
- Testable: Each component can be tested independently, and the composition can be tested as a whole.
🧠 Taking It Further
This example is just the beginning. Here are some ideas for extending this solution into a real-world application:
1. Safe Decoding with Validated
`Base64.getDecoder().decode()` can throw an `IllegalArgumentException`. Instead of an `Iso`, create a `Prism` (an optic whose getter may fail) and combine it with `Validated<String, byte[]>` to separate successes from failures gracefully.
public static final Prism<String, byte[]> SAFE_BASE64_PRISM = Prism.of(
encoded -> {
try {
return Optional.of(Base64.getDecoder().decode(encoded));
} catch (IllegalArgumentException e) {
return Optional.empty();
}
},
bytes -> Base64.getEncoder().encodeToString(bytes)
);
// Use in a traversal that accumulates both successes and failures
public static AuditResult auditWithErrorReporting(AppConfig config) {
var validatedApplicative = ValidatedMonad.instance(Semigroups.list());
Traversal<AppConfig, String> base64Strings = /* ... path to base64 strings ... */;
Validated<List<String>, List<byte[]>> result = VALIDATED.narrow(
base64Strings.modifyF(
encoded -> SAFE_BASE64_PRISM.getOptional(encoded)
.map(bytes -> VALIDATED.widen(Validated.valid(bytes)))
.orElse(VALIDATED.widen(Validated.invalid(List.of("Invalid base64: " + encoded)))),
config,
validatedApplicative
)
);
return new AuditResult(result);
}
2. Data Migration with modify
What if you need to re-encrypt all passwords with a new algorithm? The same finalAuditor optic can be used with a modify function from the Traversals utility class. You'd write a function byte[] -> byte[] and apply it:
// A function that re-encrypts the raw password bytes
Function<byte[], byte[]> reEncryptFunction = oldBytes -> newCipher.encrypt(oldBytes);
// Use the *exact same optic* to update the config in-place
AppConfig updatedConfig = Traversals.modify(finalAuditor, reEncryptFunction, originalConfig);
3. Profunctor Adaptations for Legacy Systems
Suppose your audit service expects a different data format—perhaps it works with ConfigDto objects instead of AppConfig. Rather than rewriting your carefully crafted optic, you can adapt it using profunctor operations:
// Adapt the auditor to work with legacy DTO format
Traversal<ConfigDto, byte[]> legacyAuditor = finalAuditor.contramap(dto -> convertToAppConfig(dto));
// Or adapt both input and output formats simultaneously
Traversal<ConfigDto, AuditRecord> fullyAdaptedAuditor = finalAuditor.dimap(
dto -> convertToAppConfig(dto), // Convert input format
bytes -> new AuditRecord(bytes, timestamp()) // Convert output format
);
This profunctor capability means your core business logic (the auditing path) remains unchanged whilst adapting to different system interfaces—a powerful example of the Profunctor Optics capabilities.
4. More Complex Filters
Create an optic that filters for deployments on either `gcp` or `aws`, but only in the live environment. The composable nature of optics makes building up these complex predicate queries straightforward.
// Multi-cloud live environment filter
Prism<AppConfig, AppConfig> cloudLiveOnlyPrism = Prism.of(
config -> {
String rawTarget = DeploymentTarget.toRawString().get(config.target());
boolean isLiveCloud = rawTarget.equals("gcp|live") ||
rawTarget.equals("aws|live") ||
rawTarget.equals("azure|live");
return isLiveCloud ? Optional.of(config) : Optional.empty();
},
config -> config
);
// Environment-specific processing
public static final Map<String, Traversal<AppConfig, byte[]>> ENVIRONMENT_AUDITORS = Map.of(
"development", devEnvironmentPrism.asTraversal().andThen(auditTraversal),
"staging", stagingEnvironmentPrism.asTraversal().andThen(auditTraversal),
"production", cloudLiveOnlyPrism.asTraversal().andThen(auditTraversal)
);
public static List<byte[]> auditForEnvironment(String environment, AppConfig config) {
return ENVIRONMENT_AUDITORS.getOrDefault(environment, Traversal.empty())
.getAll(config);
}
5. Configuration Validation
Use the same optics to validate your configuration. You could compose a traversal that finds all IntValue settings with the key "server.port" and use .getAll() to check if their values are within a valid range (e.g., > 1024).
public static final Traversal<AppConfig, Integer> SERVER_PORTS =
AppConfigTraversals.settings()
.andThen(settingWithKey("server.port"))
.andThen(SettingLenses.value().asTraversal())
.andThen(SettingValuePrisms.intValue().asTraversal())
.andThen(IntValueLenses.value().asTraversal());
public static List<String> validatePorts(AppConfig config) {
return Traversals.getAll(SERVER_PORTS, config).stream()
.filter(port -> port <= 1024 || port > 65535)
.map(port -> "Invalid port: " + port + " (must be 1024-65535)")
.collect(toList());
}
6. Audit Trail Generation
Extend the auditor to generate comprehensive audit trails:
public record AuditEntry(String configName, String settingKey, String encryptedValue,
Instant auditTime, String auditorId) {}
public static final Traversal<AppConfig, AuditEntry> AUDIT_TRAIL_GENERATOR =
gcpLiveOnlyPrism.asTraversal()
.andThen(AppConfigTraversals.settings())
.andThen(settingFilter)
.andThen(auditEntryMapper);
// Generate complete audit report
public static AuditReport generateAuditReport(List<AppConfig> configs, String auditorId) {
List<AuditEntry> entries = configs.stream()
.flatMap(config -> Traversals.getAll(AUDIT_TRAIL_GENERATOR, config).stream())
.collect(toList());
return new AuditReport(entries, Instant.now(), auditorId);
}
This combination of composability, type safety, and profunctor adaptability makes higher-kinded-j optics incredibly powerful for real-world data processing scenarios, particularly in enterprise environments where data formats, security requirements, and compliance needs are constantly evolving.
Previous: Optics Examples
Working with Core Types and Optics

As you've learnt from the previous chapters, optics provide a powerful way to focus on and modify immutable data structures. But what happens when the data you're working with is wrapped in Higher-Kinded-J's core types—Maybe, Either, Validated, or Try?
Traditional optics work brilliantly with straightforward, deterministic data. However, real-world applications rarely deal with such certainty. Fields might be null, operations might fail, validation might produce errors, and database calls might throw exceptions. Handling these scenarios whilst maintaining clean, composable optics code requires a bridge between these two powerful abstractions.
This is where Core Type Integration comes in.
The Challenge
Consider a typical scenario: updating a user profile where some fields are optional, validation might fail, and the database operation might throw an exception.
public User updateUserProfile(User user, String newEmail) {
// Null checking
if (user == null || user.getProfile() == null) {
return null; // Or throw exception?
}
// Validation
if (newEmail == null || !newEmail.contains("@")) {
throw new ValidationException("Invalid email");
}
// Try to update
try {
String validated = validateEmailFormat(newEmail);
Profile updated = user.getProfile().withEmail(validated);
return user.withProfile(updated);
} catch (Exception e) {
// Now what? Log and return null? Re-throw?
log.error("Failed to update email", e);
return null;
}
}
This code is a mess of concerns: null handling, validation logic, exception management, and the actual update logic are all tangled together. Compare the same operation rewritten with optics and core types:
public Either<String, User> updateUserProfile(User user, String newEmail) {
Lens<User, Profile> profileLens = UserLenses.profile();
Lens<Profile, String> emailLens = ProfileLenses.email();
Lens<User, String> userToEmail = profileLens.andThen(emailLens);
return modifyEither(
userToEmail,
email -> validateEmail(email),
user
);
}
private Either<String, String> validateEmail(String email) {
if (email == null || !email.contains("@")) {
return Either.left("Invalid email format");
}
return Either.right(email.toLowerCase());
}
Clean separation of concerns:
- Optics define the path to the data
- Core types handle the errors
- Business logic stays pure and testable
Three Complementary Approaches
Higher-Kinded-J provides three integrated solutions for working with core types and optics:
All the extension methods shown here can also be accessed through Higher-Kinded-J's Fluent API, which provides a more Java-friendly syntax for optic operations. The examples below use static imports for conciseness, but you can also use OpticOps methods for a more discoverable API.
1. Core Type Prisms 🔬 — Pattern Matching on Functional Types
Extract values from Maybe, Either, Validated, and Try using prisms, just as you would with sealed interfaces.
Prism<Maybe<User>, User> justPrism = Prisms.just();
Prism<Try<Order>, Order> successPrism = Prisms.success();
// Extract user if present
Optional<User> user = justPrism.getOptional(maybeUser);
// Extract order if successful
Optional<Order> order = successPrism.getOptional(tryOrder);
Best for: Safe extraction and pattern matching on core types, composing with other optics.
2. Lens Extensions 🛡️ — Safety Rails for Lens Operations
Augment lenses with built-in null safety, validation, and exception handling.
Lens<User, String> emailLens = UserLenses.email();
// Null-safe access
Maybe<String> email = getMaybe(emailLens, user);
// Validated modification
Either<String, User> updated = modifyEither(
emailLens,
email -> validateEmail(email),
user
);
// Exception-safe database operation
Try<User> saved = modifyTry(
emailLens,
email -> Try.of(() -> updateInDatabase(email)),
user
);
Best for: Individual field operations with validation, null-safe access, exception handling.
3. Traversal Extensions 🗺️ — Bulk Operations with Error Handling
Process collections using traversals whilst accumulating errors or failing fast.
Traversal<List<Order>, BigDecimal> allPrices =
Traversals.forList().andThen(OrderLenses.price().asTraversal());
// Validate all prices (accumulate errors)
Validated<List<String>, List<Order>> result = modifyAllValidated(
allPrices,
price -> validatePrice(price),
orders
);
// Or fail fast at first error
Either<String, List<Order>> fastResult = modifyAllEither(
allPrices,
price -> validatePrice(price),
orders
);
Best for: Bulk validation, batch processing, error accumulation vs fail-fast strategies.
When to Use Each Approach
Use Core Type Prisms when:
- ✅ Extracting values from `Maybe`, `Either`, `Validated`, or `Try`
- ✅ Pattern matching on functional types without `instanceof`
- ✅ Composing core types with other optics for deep navigation
- ✅ Safely handling optional API responses or database results
Use Lens Extensions when:
- ✅ Accessing potentially null fields
- ✅ Validating single field updates
- ✅ Performing operations that might throw exceptions
- ✅ Implementing form validation with immediate feedback
Use Traversal Extensions when:
- ✅ Validating collections of data
- ✅ Batch processing with error accumulation
- ✅ Applying bulk updates with validation
- ✅ Counting valid items or collecting errors
The Power of Composition
The real magic happens when you combine these approaches:
// Complete order processing pipeline
Order order = ...;
// 1. Extract customer using prism (Maybe)
Prism<Maybe<Customer>, Customer> justPrism = Prisms.just();
Maybe<Customer> maybeCustomer = order.getCustomer();
// 2. Validate customer email using lens extension
Lens<Customer, String> emailLens = CustomerLenses.email();
Either<String, Customer> validatedCustomer =
maybeCustomer.map(customer ->
modifyEither(emailLens, email -> validateEmail(email), customer)
).orElse(Either.left("No customer"));
// 3. Validate all order items using traversal extension
Traversal<List<OrderItem>, BigDecimal> allPrices =
Traversals.forList().andThen(OrderItemLenses.price().asTraversal());
Validated<List<String>, List<OrderItem>> validatedItems =
modifyAllValidated(
allPrices,
price -> validatePrice(price),
order.getItems()
);
// Combine results...
Real-World Examples
All three approaches are demonstrated with comprehensive, runnable examples:
- CoreTypePrismsExample — API response processing
- LensExtensionsExample — User profile validation
- TraversalExtensionsExample — Bulk order processing
- IntegrationPatternsExample — Complete e-commerce workflow
Key Benefits
🎯 Separation of Concerns
Business logic, validation, and error handling remain cleanly separated. Optics define the structure, core types handle the effects.
🔄 Composability
All three approaches compose seamlessly with each other and with standard optics operations.
📊 Error Accumulation
Choose between fail-fast (stop at first error) or error accumulation (collect all errors) based on your requirements.
🛡️ Type Safety
The compiler ensures you handle all cases. No silent failures, no unexpected nulls.
📖 Readability
Code reads like the business logic it implements, without defensive programming clutter.
Understanding the Core Types
Before diving into the integration patterns, ensure you're familiar with Higher-Kinded-J's core types:
- `Maybe` — Represents optional values (similar to `Optional`)
- `Either` — Represents a value that can be one of two types (success or error)
- `Validated` — Like `Either`, but accumulates errors
- `Try` — Represents a computation that may throw an exception
Common Pitfalls
Whilst all three core type families work with optics, mixing them inappropriately can lead to confusing code:
// ❌ Confusing: Mixing Maybe and Either unnecessarily
Maybe<Either<String, User>> confusing = ...;
// ✅ Better: Choose one based on your needs
Either<String, User> clear = ...; // If you have an error message
Maybe<User> simple = ...; // If it's just presence/absence
When in doubt, start with Either. It's the most versatile:
- Carries error information (unlike `Maybe`)
- Fails fast (unlike `Validated`)
- Doesn't catch exceptions automatically (unlike `Try`)
You can always switch to Validated for error accumulation or Try for exception handling when needed.
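To see the practical difference between the fail-fast and accumulating styles, here is a small plain-Java sketch (no library types; the `validate` helper is hypothetical) contrasting "stop at the first error" with "collect every error":

```java
import java.util.List;
import java.util.Optional;

public class ValidationStrategies {
    // Hypothetical validator: empty means valid, present means an error message.
    static Optional<String> validate(int port) {
        return (port > 1024 && port <= 65535)
            ? Optional.empty()
            : Optional.of("Invalid port: " + port);
    }

    public static void main(String[] args) {
        List<Integer> ports = List.of(80, 8080, 22);

        // Either-like fail-fast: report only the first error found.
        Optional<String> firstError = ports.stream()
            .map(ValidationStrategies::validate)
            .flatMap(Optional::stream)
            .findFirst();
        System.out.println(firstError.orElse("ok")); // Invalid port: 80

        // Validated-like accumulation: report every error at once.
        List<String> allErrors = ports.stream()
            .map(ValidationStrategies::validate)
            .flatMap(Optional::stream)
            .toList();
        System.out.println(allErrors); // [Invalid port: 80, Invalid port: 22]
    }
}
```

Fail-fast suits dependent steps (no point validating further once one fails); accumulation suits independent checks such as form fields, where the user wants every problem reported in one pass.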
Next Steps
Now that you understand the three complementary approaches, dive into each one:
- Core Type Prisms — Start here to learn safe extraction
- Lens Extensions — Master validated field operations
- Traversal Extensions — Handle bulk operations
Each guide includes detailed examples, best practices, and common patterns you'll use every day.
Next: Core Type Prisms: Safe Extraction
Core Type Prisms: Safe Extraction and Pattern Matching

Prisms are optics that focus on one case of a sum type. They're perfect for safely extracting values from Maybe, Either, Validated, and Try without verbose pattern matching or null checks.
Think of a prism like a quality inspector at a factory sorting line. It can:
- Identify whether an item matches a specific case (`matches()`)
- Extract the value if it matches (`getOptional()`)
- Construct a new value of that case (`build()`)
This chapter shows you how to use prisms to work elegantly with Higher-Kinded-J's core types.
The Problem: Verbose Pattern Matching
Before we dive into prisms, let's see the traditional approach:
// Extracting from Maybe
Maybe<User> maybeUser = getUserById("u123");
if (maybeUser.isJust()) {
User user = maybeUser.get();
processUser(user);
} else {
handleMissingUser();
}
// Extracting from Either
Either<String, Order> result = createOrder(request);
if (result.isRight()) {
Order order = result.fold(err -> null, ord -> ord);
saveOrder(order);
} else {
String error = result.fold(err -> err, ord -> null);
logError(error);
}
// Extracting from Try
Try<Connection> tryConnection = connectToDatabase();
if (tryConnection.isSuccess()) {
Connection conn = tryConnection.fold(c -> c, ex -> null);
useConnection(conn);
} else {
Throwable error = tryConnection.fold(c -> null, ex -> ex);
handleError(error);
}
This code is repetitive, error-prone, and hard to compose with other operations. Prisms remove the boilerplate:
// Extracting from Maybe
Prism<Maybe<User>, User> justPrism = Prisms.just();
getUserById("u123")
.flatMap(justPrism::getOptional)
.ifPresent(this::processUser);
// Extracting from Either
Prism<Either<String, Order>, Order> rightPrism = Prisms.right();
Prism<Either<String, Order>, String> leftPrism = Prisms.left();
Either<String, Order> result = createOrder(request);
rightPrism.getOptional(result).ifPresent(this::saveOrder);
leftPrism.getOptional(result).ifPresent(this::logError);
// Extracting from Try
Prism<Try<Connection>, Connection> successPrism = Prisms.success();
Prism<Try<Connection>, Throwable> failurePrism = Prisms.failure();
Try<Connection> tryConnection = connectToDatabase();
successPrism.getOptional(tryConnection).ifPresent(this::useConnection);
failurePrism.getOptional(tryConnection).ifPresent(this::handleError);
Clean, composable, and type-safe. The prisms handle the pattern matching internally.
Available Prisms
Higher-Kinded-J provides prisms for all core types in the Prisms utility class:
Prisms can also be used through the Fluent API for method chaining and better discoverability. For example, prism operations like getOptional and modify can be accessed through OpticOps methods for a more fluent interface.
Maybe Prisms
// Extract value from Just, returns empty Optional for Nothing
Prism<Maybe<A>, A> justPrism = Prisms.just();
Maybe<String> present = Maybe.just("Hello");
Maybe<String> absent = Maybe.nothing();
Optional<String> value = justPrism.getOptional(present); // Optional["Hello"]
Optional<String> empty = justPrism.getOptional(absent); // Optional.empty()
// Construct Maybe.just() from a value
Maybe<String> built = justPrism.build("World"); // Maybe.just("World")
// Check if it's a Just
boolean isJust = justPrism.matches(present); // true
boolean isNothing = justPrism.matches(absent); // false
Use Prisms.just() when:
- Extracting optional API response data
- Composing with other optics to navigate nested structures
- Converting `Maybe` to `Optional` for interop with Java APIs
- Filtering collections of `Maybe` values
Either Prisms
// Extract from Left and Right cases
Prism<Either<L, R>, L> leftPrism = Prisms.left();
Prism<Either<L, R>, R> rightPrism = Prisms.right();
Either<String, Integer> success = Either.right(42);
Either<String, Integer> failure = Either.left("Error");
// Extract success value
Optional<Integer> value = rightPrism.getOptional(success); // Optional[42]
Optional<Integer> noValue = rightPrism.getOptional(failure); // Optional.empty()
// Extract error value
Optional<String> noError = leftPrism.getOptional(success); // Optional.empty()
Optional<String> error = leftPrism.getOptional(failure); // Optional["Error"]
// Construct Either values
Either<String, Integer> newSuccess = rightPrism.build(100); // Either.right(100)
Either<String, Integer> newFailure = leftPrism.build("Oops"); // Either.left("Oops")
// Check which case
boolean isRight = rightPrism.matches(success); // true
boolean isLeft = leftPrism.matches(failure); // true
List<Either<String, User>> validationResults = validateUsers(requests);
Prism<Either<String, User>, User> validPrism = Prisms.right();
Prism<Either<String, User>, String> errorPrism = Prisms.left();
// Collect all successful users
List<User> validUsers = validationResults.stream()
.flatMap(result -> validPrism.getOptional(result).stream())
.toList();
// Collect all error messages
List<String> errors = validationResults.stream()
.flatMap(result -> errorPrism.getOptional(result).stream())
.toList();
System.out.println("Successfully validated: " + validUsers.size() + " users");
System.out.println("Validation errors: " + errors);
Validated Prisms
// Extract from Valid and Invalid cases
Prism<Validated<E, A>, A> validPrism = Prisms.valid();
Prism<Validated<E, A>, E> invalidPrism = Prisms.invalid();
Validated<String, Integer> valid = Validated.valid(30);
Validated<String, Integer> invalid = Validated.invalid("Age must be positive");
// Extract valid value
Optional<Integer> age = validPrism.getOptional(valid); // Optional[30]
Optional<Integer> noAge = validPrism.getOptional(invalid); // Optional.empty()
// Extract validation error
Optional<String> noError = invalidPrism.getOptional(valid); // Optional.empty()
Optional<String> error = invalidPrism.getOptional(invalid); // Optional["Age must be positive"]
// Construct Validated values
Validated<String, Integer> newValid = validPrism.build(25); // Validated.valid(25)
Validated<String, Integer> newInvalid = invalidPrism.build("Error"); // Validated.invalid("Error")
Validated and Either have similar prisms, but serve different purposes:
- Either prisms: Use for fail-fast validation (stop at first error)
- Validated prisms: Use with error accumulation (collect all errors)
The prisms themselves work identically—the difference is in how you combine multiple validations.
Try Prisms
// Extract from Success and Failure cases
Prism<Try<A>, A> successPrism = Prisms.success();
Prism<Try<A>, Throwable> failurePrism = Prisms.failure();
Try<Integer> success = Try.success(42);
Try<Integer> failure = Try.failure(new RuntimeException("Database error"));
// Extract success value
Optional<Integer> value = successPrism.getOptional(success); // Optional[42]
Optional<Integer> noValue = successPrism.getOptional(failure); // Optional.empty()
// Extract exception
Optional<Throwable> noEx = failurePrism.getOptional(success); // Optional.empty()
Optional<Throwable> ex = failurePrism.getOptional(failure); // Optional[RuntimeException]
// Construct Try values
Try<Integer> newSuccess = successPrism.build(100); // Try.success(100)
Try<Integer> newFailure = failurePrism.build(new IllegalStateException("Oops"));
List<Try<User>> dbResults = List.of(
Try.of(() -> fetchUser("u1")),
Try.of(() -> fetchUser("u2")),
Try.of(() -> fetchUser("u3"))
);
Prism<Try<User>, User> successPrism = Prisms.success();
Prism<Try<User>, Throwable> failurePrism = Prisms.failure();
// Count successful loads
long successCount = dbResults.stream()
.filter(successPrism::matches)
.count();
// Log all errors
dbResults.stream()
.flatMap(result -> failurePrism.getOptional(result).stream())
.forEach(error -> logger.error("Database error: {}", error.getMessage()));
System.out.println("Loaded " + successCount + "/" + dbResults.size() + " users");
Traversals for Core Types
Whilst prisms extract values, traversals modify values inside core types. Higher-Kinded-J provides traversal utilities for all core types:
Maybe Traversals
import org.higherkindedj.optics.util.MaybeTraversals;
Traversal<Maybe<String>, String> justTraversal = MaybeTraversals.just();
// Modify value inside Just
Maybe<String> original = Maybe.just("hello");
Maybe<String> modified = Traversals.modify(justTraversal, String::toUpperCase, original);
// Result: Maybe.just("HELLO")
// No effect on Nothing
Maybe<String> nothing = Maybe.nothing();
Maybe<String> unchanged = Traversals.modify(justTraversal, String::toUpperCase, nothing);
// Result: Maybe.nothing()
Either Traversals
import org.higherkindedj.optics.util.EitherTraversals;
Traversal<Either<String, Integer>, Integer> rightTraversal = EitherTraversals.right();
Traversal<Either<String, Integer>, String> leftTraversal = EitherTraversals.left();
// Modify Right value
Either<String, Integer> success = Either.right(100);
Either<String, Integer> doubled = Traversals.modify(rightTraversal, n -> n * 2, success);
// Result: Either.right(200)
// Enrich Left value (error enrichment)
Either<String, Integer> error = Either.left("Connection failed");
Either<String, Integer> enriched = Traversals.modify(
leftTraversal,
msg -> "[ERROR] " + msg,
error
);
// Result: Either.left("[ERROR] Connection failed")
The EitherTraversals.left() traversal is excellent for error enrichment—adding context or formatting to error messages without unwrapping the Either:
Either<String, Order> result = processOrder(request);
// Add request ID to all errors
Either<String, Order> enriched = Traversals.modify(
EitherTraversals.left(),
error -> String.format("[Request %s] %s", requestId, error),
result
);
Validated Traversals
import org.higherkindedj.optics.util.ValidatedTraversals;
Traversal<Validated<String, Integer>, Integer> validTraversal = ValidatedTraversals.valid();
Traversal<Validated<String, Integer>, String> invalidTraversal = ValidatedTraversals.invalid();
// Modify valid value
Validated<String, Integer> valid = Validated.valid(30);
Validated<String, Integer> incremented = Traversals.modify(validTraversal, age -> age + 1, valid);
// Result: Validated.valid(31)
// Modify error
Validated<String, Integer> invalid = Validated.invalid("Age required");
Validated<String, Integer> formatted = Traversals.modify(
invalidTraversal,
err -> "Validation Error: " + err,
invalid
);
// Result: Validated.invalid("Validation Error: Age required")
Try Traversals
import org.higherkindedj.optics.util.TryTraversals;
Traversal<Try<Integer>, Integer> successTraversal = TryTraversals.success();
Traversal<Try<Integer>, Throwable> failureTraversal = TryTraversals.failure();
// Modify success value
Try<Integer> success = Try.success(42);
Try<Integer> doubled = Traversals.modify(successTraversal, n -> n * 2, success);
// Result: Try.success(84)
// Wrap exceptions
Try<Integer> failure = Try.failure(new SQLException("Connection lost"));
Try<Integer> wrapped = Traversals.modify(
failureTraversal,
cause -> new DatabaseException("Database error", cause),
failure
);
// Result: Try.failure(DatabaseException wrapping SQLException)
Composition: The Real Power
Prisms compose seamlessly with lenses and other optics to navigate deeply nested structures:
@GenerateLenses
record ApiResponse(int statusCode, Maybe<Order> data, List<String> warnings) {}
@GenerateLenses
record Order(String orderId, Customer customer, List<OrderItem> items) {}
@GenerateLenses
record Customer(String customerId, String name, String email) {}
// Get customer email from API response (if present)
ApiResponse response = fetchOrder("ORD-123");
// Method 1: Using prism directly
Prism<Maybe<Order>, Order> justPrism = Prisms.just();
Optional<String> email = justPrism.getOptional(response.data())
.map(order -> order.customer().email());
// Method 2: Compose with lenses for a complete path
Lens<ApiResponse, Maybe<Order>> dataLens = ApiResponseLenses.data();
Traversal<Maybe<Order>, Order> orderTraversal = MaybeTraversals.just();
Lens<Order, Customer> customerLens = OrderLenses.customer();
Lens<Customer, String> emailLens = CustomerLenses.email();
// Full composition: ApiResponse -> Maybe<Order> -> Order -> Customer -> email
Traversal<ApiResponse, String> emailPath = dataLens
.andThen(orderTraversal)
.andThen(customerLens.asTraversal())
.andThen(emailLens.asTraversal());
List<String> emails = Traversals.toListOf(emailPath, response);
// Result: ["customer@example.com"] or [] if no order data
Processing Collections of Core Types
Prisms excel at filtering and extracting from collections of Maybe, Either, Validated, or Try:
Extracting Successes
List<Try<User>> dbResults = loadUsersFromDatabase(userIds);
Prism<Try<User>, User> successPrism = Prisms.success();
// Get all successfully loaded users
List<User> users = dbResults.stream()
.flatMap(result -> successPrism.getOptional(result).stream())
.toList();
Extracting Failures
List<Either<ValidationError, Order>> validations = validateOrders(requests);
Prism<Either<ValidationError, Order>, ValidationError> errorPrism = Prisms.left();
// Collect all validation errors
List<ValidationError> errors = validations.stream()
.flatMap(result -> errorPrism.getOptional(result).stream())
.toList();
if (!errors.isEmpty()) {
displayErrorsToUser(errors);
}
Counting Cases
List<Validated<List<String>, Product>> validations = validateProducts(products);
Prism<Validated<List<String>, Product>, Product> validPrism = Prisms.valid();
long validCount = validations.stream()
.filter(validPrism::matches)
.count();
System.out.println(validCount + "/" + validations.size() + " products valid");
Common Patterns
Pattern 1: Optional Chaining with Maybe
Instead of nested if (isJust()) checks:
// ❌ Traditional
Maybe<User> maybeUser = findUser(id);
if (maybeUser.isJust()) {
User user = maybeUser.get();
Maybe<Address> maybeAddress = user.getAddress();
if (maybeAddress.isJust()) {
Address address = maybeAddress.get();
processAddress(address);
}
}
// ✅ With prisms
Prism<Maybe<User>, User> justUserPrism = Prisms.just();
Prism<Maybe<Address>, Address> justAddressPrism = Prisms.just();
justUserPrism.getOptional(findUser(id))
.map(user -> user.getAddress())
.flatMap(justAddressPrism::getOptional)
.ifPresent(this::processAddress);
Pattern 2: Error Handling with Either
Extracting specific error types:
sealed interface AppError permits ValidationError, DatabaseError, NetworkError {}
Either<AppError, User> result = createUser(request);
Prism<Either<AppError, User>, User> successPrism = Prisms.right();
Prism<Either<AppError, User>, AppError> errorPrism = Prisms.left();
// Handle success
successPrism.getOptional(result).ifPresent(user -> {
logger.info("User created: {}", user.id());
sendWelcomeEmail(user);
});
// Handle errors
errorPrism.getOptional(result).ifPresent(error -> {
switch (error) {
case ValidationError ve -> displayFormErrors(ve);
case DatabaseError de -> retryOrFallback(de);
case NetworkError ne -> scheduleRetry(ne);
}
});
Pattern 3: Exception Recovery with Try
Try<Config> configResult = Try.of(() -> loadConfig(configPath));
Prism<Try<Config>, Config> successPrism = Prisms.success();
Prism<Try<Config>, Throwable> failurePrism = Prisms.failure();
// Use config if loaded successfully
Config config = successPrism.getOptional(configResult)
.orElseGet(() -> {
// Log the failure
failurePrism.getOptional(configResult).ifPresent(error ->
logger.error("Failed to load config", error)
);
// Return default config
return Config.defaults();
});
Before/After Comparison
Let's see a complete real-world scenario comparing traditional approaches with prisms:
Scenario: Processing a batch of API responses, each containing optional user data.
public List<String> extractUserEmails(List<ApiResponse<User>> responses) {
List<String> emails = new ArrayList<>();
for (ApiResponse<User> response : responses) {
if (response.statusCode() == 200) {
Maybe<User> data = response.data();
if (data.isJust()) {
User user = data.get();
if (user.email() != null) {
emails.add(user.email());
}
}
}
}
return emails;
}
Problems:
- Deeply nested conditionals
- Manual null checking
- Imperative style with mutable list
- Easy to introduce bugs
public List<String> extractUserEmails(List<ApiResponse<User>> responses) {
Prism<Maybe<User>, User> justPrism = Prisms.just();
return responses.stream()
.filter(r -> r.statusCode() == 200)
.flatMap(r -> justPrism.getOptional(r.data()).stream())
.map(User::email)
.filter(Objects::nonNull)
.toList();
}
Benefits:
- Flat, readable pipeline
- Prism handles the Maybe extraction
- Declarative, functional style
- Harder to introduce bugs
Best Practices
Use prisms when:
- Extracting values from core types
- Pattern matching on sum types
- Composing with other optics for deep navigation
- Processing collections of core types
Use traversals when:
- Modifying values inside core types
- Applying transformations conditionally
- Error enrichment or exception wrapping
Remember that prism.getOptional() returns Java's Optional, not Maybe:
Prism<Maybe<String>, String> justPrism = Prisms.just();
Maybe<String> maybeValue = Maybe.just("Hello");
// Returns Optional, not Maybe
Optional<String> value = justPrism.getOptional(maybeValue);
// Convert back to Maybe if needed
Maybe<String> backToMaybe = value
.map(Maybe::just)
.orElse(Maybe.nothing());
Working Example
For a complete, runnable demonstration of all these patterns, see:
This example demonstrates:
- All core type prisms (Maybe, Either, Validated, Try)
- All core type traversals
- Composition with lenses
- Processing collections
- Before/after comparisons
- Real-world API response processing
Summary
Core type prisms provide:
🎯 Safe Extraction — Extract values from Maybe, Either, Validated, and Try without null checks or verbose pattern matching
🔍 Pattern Matching — Use matches() to check cases, getOptional() to extract values
🔄 Composability — Combine with lenses and traversals for deep navigation
📊 Collection Processing — Filter, extract, and count different cases in collections
🛡️ Type Safety — The compiler ensures you handle all cases correctly
Next Steps
Now that you understand core type prisms, learn how to enhance lens operations with validation and error handling:
Next: Lens Extensions: Validated Field Operations
Or return to the overview:
Back: Working with Core Types and Optics
Lens Extensions: Validated Field Operations
Lenses provide a composable way to focus on and update fields in immutable data structures. But what happens when those fields might be null, updates require validation, or operations might throw exceptions?
Traditional lenses work brilliantly with clean, valid data. Real-world applications, however, deal with nullable fields, validation requirements, and exception-prone operations. Lens Extensions bridge this gap by augmenting lenses with built-in support for Higher-Kinded-J's core types.
Think of lens extensions as safety rails for your lenses—they catch null values, validate modifications, and handle exceptions whilst maintaining the elegance of functional composition.
The Problem: Defensive Programming Clutter
Let's see what happens when we try to use lenses with real-world messy data:
public User updateUserEmail(User user, String newEmail) {
Lens<User, String> emailLens = UserLenses.email();
// Null checking
if (user == null) {
throw new NullPointerException("User cannot be null");
}
String currentEmail = emailLens.get(user);
if (currentEmail == null) {
// Now what? Set default? Throw exception?
}
// Validation
if (newEmail == null || !newEmail.contains("@")) {
throw new ValidationException("Invalid email format");
}
// Update
try {
String validated = validateEmailFormat(newEmail);
return emailLens.set(validated, user);
} catch (Exception e) {
// Handle exception, but lens already called set()
throw new RuntimeException("Update failed", e);
}
}
The lens operation is buried under layers of null checks, validation, and exception handling.
public Either<String, User> updateUserEmail(User user, String newEmail) {
Lens<User, String> emailLens = UserLenses.email();
return modifyEither(
emailLens,
email -> validateEmail(email), // Returns Either<String, String>
user
);
}
private Either<String, String> validateEmail(String email) {
if (email == null || !email.contains("@")) {
return Either.left("Invalid email format");
}
return Either.right(email.toLowerCase());
}
Clean separation: the lens defines where to update, the validation function defines what is valid, and Either carries the result or error. No defensive programming clutter.
Available Lens Extensions
Higher-Kinded-J provides lens extensions in the LensExtensions utility class. All methods are static, designed for import with import static:
import static org.higherkindedj.optics.extensions.LensExtensions.*;
These extension methods are also available through the Fluent API, which provides method chaining and a more discoverable interface. For example, getMaybe(lens, source) can also be written as OpticOps.getting(source).through(lens).asMaybe().
Safe Access Methods
These methods safely get values from fields that might be null:
getMaybe — Null-Safe Field Access
public static <S, A> Maybe<A> getMaybe(Lens<S, A> lens, S source)
Returns Maybe.just(value) if the field is non-null, Maybe.nothing() otherwise.
Lens<UserProfile, String> bioLens = UserProfileLenses.bio();
UserProfile withBio = new UserProfile("u1", "Alice", "alice@example.com", 30, "Software Engineer");
Maybe<String> bio = getMaybe(bioLens, withBio); // Maybe.just("Software Engineer")
UserProfile withoutBio = new UserProfile("u2", "Bob", "bob@example.com", 25, null);
Maybe<String> noBio = getMaybe(bioLens, withoutBio); // Maybe.nothing()
// Use with default
String displayBio = bio.orElse("No bio provided");
Use getMaybe when:
- Accessing optional fields (bio, middle name, phone number)
- Avoiding NullPointerException when calling methods on the field
- Composing multiple optional accesses
- Converting between optics and functional style
getEither — Access with Default Error
public static <S, A, E> Either<E, A> getEither(Lens<S, A> lens, E error, S source)
Returns Either.right(value) if non-null, Either.left(error) if null.
Lens<UserProfile, Integer> ageLens = UserProfileLenses.age();
UserProfile validProfile = new UserProfile("u1", "Alice", "alice@example.com", 30, "Engineer");
Either<String, Integer> age = getEither(ageLens, "Age not provided", validProfile);
// Either.right(30)
UserProfile invalidProfile = new UserProfile("u2", "Bob", "bob@example.com", null, "Student");
Either<String, Integer> noAge = getEither(ageLens, "Age not provided", invalidProfile);
// Either.left("Age not provided")
// Use in a pipeline
String message = age.fold(
error -> "Error: " + error,
a -> "Age: " + a
);
getValidated — Access with Validation Error
public static <S, A, E> Validated<E, A> getValidated(Lens<S, A> lens, E error, S source)
Like getEither, but returns Validated for consistency with validation workflows.
Lens<UserProfile, String> emailLens = UserProfileLenses.email();
UserProfile profile = new UserProfile("u1", "Alice", "alice@example.com", 30, "Engineer");
Validated<String, String> email = getValidated(emailLens, "Email is required", profile);
// Validated.valid("alice@example.com")
UserProfile noEmail = new UserProfile("u2", "Bob", null, 25, "Student");
Validated<String, String> missing = getValidated(emailLens, "Email is required", noEmail);
// Validated.invalid("Email is required")
Modification Methods
These methods modify fields with validation, null-safety, or exception handling:
modifyMaybe — Optional Modifications
public static <S, A> Maybe<S> modifyMaybe(
Lens<S, A> lens,
Function<A, Maybe<A>> modifier,
S source)
Apply a modification that might not succeed. Returns Maybe.just(updated) if the modification succeeds, Maybe.nothing() if it fails.
Lens<UserProfile, String> nameLens = UserProfileLenses.name();
UserProfile profile = new UserProfile("u1", "Alice", "alice@example.com", 30, "Engineer");
// Successful modification
Maybe<UserProfile> updated = modifyMaybe(
nameLens,
name -> name.length() >= 2 ? Maybe.just(name.toUpperCase()) : Maybe.nothing(),
profile
);
// Maybe.just(UserProfile with name "ALICE")
// Failed modification
UserProfile shortName = new UserProfile("u2", "A", "a@example.com", 25, "Student");
Maybe<UserProfile> failed = modifyMaybe(
nameLens,
name -> name.length() >= 2 ? Maybe.just(name.toUpperCase()) : Maybe.nothing(),
shortName
);
// Maybe.nothing()
// Use result
String result = updated
.map(p -> "Updated: " + p.name())
.orElse("Update failed");
Lens<Product, String> skuLens = ProductLenses.sku();
// Only format SKU if it matches a pattern
Maybe<Product> formatted = modifyMaybe(
skuLens,
sku -> sku.matches("^[A-Za-z]{3}-\\d{4}$")
? Maybe.just(sku.toUpperCase())
: Maybe.nothing(), // Malformed SKU: the whole update fails
product
);
modifyEither — Fail-Fast Validation
public static <S, A, E> Either<E, S> modifyEither(
Lens<S, A> lens,
Function<A, Either<E, A>> modifier,
S source)
Apply a modification with validation. Returns Either.right(updated) if valid, Either.left(error) if invalid. Stops at first error.
Lens<UserProfile, Integer> ageLens = UserProfileLenses.age();
UserProfile profile = new UserProfile("u1", "Alice", "alice@example.com", 30, "Engineer");
// Valid modification
Either<String, UserProfile> updated = modifyEither(
ageLens,
age -> {
if (age < 0) return Either.left("Age cannot be negative");
if (age > 150) return Either.left("Age must be realistic");
return Either.right(age + 1); // Birthday!
},
profile
);
// Either.right(UserProfile with age 31)
// Invalid modification
UserProfile invalidAge = new UserProfile("u2", "Bob", "bob@example.com", 200, "Time traveller");
Either<String, UserProfile> failed = modifyEither(
ageLens,
age -> {
if (age < 0) return Either.left("Age cannot be negative");
if (age > 150) return Either.left("Age must be realistic");
return Either.right(age + 1);
},
invalidAge
);
// Either.left("Age must be realistic")
// Display result
String message = updated.fold(
error -> "❌ " + error,
user -> "✅ Updated age to " + user.age()
);
Use modifyEither for fail-fast validation:
- Single field updates where you want to stop at the first error
- API request validation (reject immediately if any field is invalid)
- Form submissions where you show the first error encountered
- Operations where continuing after an error doesn't make sense
modifyValidated — Validated Modifications
public static <S, A, E> Validated<E, S> modifyValidated(
Lens<S, A> lens,
Function<A, Validated<E, A>> modifier,
S source)
Like modifyEither, but returns Validated for consistency with error accumulation workflows.
Lens<UserProfile, String> emailLens = UserProfileLenses.email();
UserProfile profile = new UserProfile("u1", "Alice", "old@example.com", 30, "Engineer");
// Valid email format
Validated<String, UserProfile> updated = modifyValidated(
emailLens,
email -> {
if (!email.contains("@")) {
return Validated.invalid("Email must contain @");
}
if (!email.endsWith(".com") && !email.endsWith(".co.uk")) {
return Validated.invalid("Email must end with .com or .co.uk");
}
return Validated.valid(email.toLowerCase());
},
profile
);
// Validated.valid(UserProfile with email "old@example.com")
// Invalid email format
UserProfile badEmail = new UserProfile("u2", "Bob", "invalid-email", 25, "Student");
Validated<String, UserProfile> failed = modifyValidated(
emailLens,
email -> {
if (!email.contains("@")) {
return Validated.invalid("Email must contain @");
}
if (!email.endsWith(".com") && !email.endsWith(".co.uk")) {
return Validated.invalid("Email must end with .com or .co.uk");
}
return Validated.valid(email.toLowerCase());
},
badEmail
);
// Validated.invalid("Email must contain @")
For single field validation, modifyEither and modifyValidated behave identically (both fail fast). The difference matters when validating multiple fields—use Validated when you want to accumulate errors across fields.
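The fail-fast versus accumulating distinction can be sketched in plain Java. This is an illustrative stand-in, not the library API: a present `Optional<String>` plays the role of a single error, and the two strategies are applied to a profile with two invalid fields.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Plain-Java sketch (not the library's Either/Validated types) contrasting
// fail-fast and accumulating validation across two fields.
public class ValidationStyles {
    record Profile(String email, int age) {}

    static Optional<String> checkEmail(String email) {
        return email.contains("@") ? Optional.empty() : Optional.of("Email must contain @");
    }

    static Optional<String> checkAge(int age) {
        return (age >= 0 && age <= 150) ? Optional.empty() : Optional.of("Age must be between 0 and 150");
    }

    // Fail-fast (Either-style): report only the first error encountered.
    static Optional<String> failFast(Profile p) {
        Optional<String> emailError = checkEmail(p.email());
        return emailError.isPresent() ? emailError : checkAge(p.age());
    }

    // Accumulating (Validated-style): gather every error before reporting.
    static List<String> accumulate(Profile p) {
        List<String> errors = new ArrayList<>();
        checkEmail(p.email()).ifPresent(errors::add);
        checkAge(p.age()).ifPresent(errors::add);
        return errors;
    }
}
```

Given a profile with a bad email and a bad age, `failFast` reports only the email error, while `accumulate` returns both, which is exactly the trade-off between modifyEither and a multi-field Validated workflow.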
modifyTry — Exception-Safe Modifications
public static <S, A> Try<S> modifyTry(
Lens<S, A> lens,
Function<A, Try<A>> modifier,
S source)
Apply a modification that might throw exceptions. Returns Try.success(updated) if successful, Try.failure(exception) if an exception occurred.
Lens<UserProfile, String> emailLens = UserProfileLenses.email();
UserProfile profile = new UserProfile("u1", "Alice", "alice@example.com", 30, "Engineer");
// Successful database update
Try<UserProfile> updated = modifyTry(
emailLens,
email -> Try.of(() -> updateEmailInDatabase(email)),
profile
);
// Try.success(UserProfile with updated email)
// Failed database update
UserProfile badEmail = new UserProfile("u2", "Bob", "fail@example.com", 25, "Student");
Try<UserProfile> failed = modifyTry(
emailLens,
email -> Try.of(() -> updateEmailInDatabase(email)),
badEmail
);
// Try.failure(RuntimeException: "Database connection failed")
// Handle result
updated.match(
user -> logger.info("Email updated: {}", user.email()),
error -> logger.error("Update failed", error)
);
// Update user's profile picture by uploading to S3
Lens<User, String> profilePictureLens = UserLenses.profilePictureUrl();
Try<User> result = modifyTry(
profilePictureLens,
oldUrl -> Try.of(() -> {
// Upload new image to S3 (might throw IOException, AmazonS3Exception)
String newUrl = s3Client.uploadImage(imageData);
// Delete old image if it exists
if (oldUrl != null && !oldUrl.isEmpty()) {
s3Client.deleteImage(oldUrl);
}
return newUrl;
}),
user
);
result.match(
updated -> sendSuccessResponse(updated),
error -> sendErrorResponse("Image upload failed: " + error.getMessage())
);
setIfValid — Conditional Updates
public static <S, A, E> Either<E, S> setIfValid(
Lens<S, A> lens,
Function<A, Either<E, A>> validator,
A newValue,
S source)
Set a new value only if it passes validation. Unlike modifyEither, you provide the new value directly rather than deriving it from the old value.
Lens<UserProfile, String> nameLens = UserProfileLenses.name();
UserProfile profile = new UserProfile("u1", "Alice", "alice@example.com", 30, "Engineer");
// Valid name format
Either<String, UserProfile> updated = setIfValid(
nameLens,
name -> {
if (name.length() < 2) {
return Either.left("Name must be at least 2 characters");
}
if (!name.matches("[A-Z][a-z]+")) {
return Either.left("Name must start with capital letter");
}
return Either.right(name);
},
"Robert",
profile
);
// Either.right(UserProfile with name "Robert")
// Invalid name format
Either<String, UserProfile> failed = setIfValid(
nameLens,
name -> {
if (name.length() < 2) {
return Either.left("Name must be at least 2 characters");
}
if (!name.matches("[A-Z][a-z]+")) {
return Either.left("Name must start with capital letter");
}
return Either.right(name);
},
"bob123",
profile
);
// Either.left("Name must start with capital letter")
Use setIfValid when:
- The new value comes from user input or external source
- You're not transforming the old value
- You want to validate before setting
Use modifyEither when:
- The new value is derived from the old value (e.g., incrementing, formatting)
- You're transforming the current value
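The relationship between the two methods can be made concrete with minimal stand-in types (plain Java, not the library's Lens or Either): setIfValid is just modifyEither whose modifier ignores the current value and validates the supplied new value instead.

```java
import java.util.function.BiFunction;
import java.util.function.Function;

// Conceptual sketch with stand-in types, not the library implementation.
public class SetIfValidSketch {
    sealed interface Either<L, R> {
        record Left<L, R>(L value) implements Either<L, R> {}
        record Right<L, R>(R value) implements Either<L, R> {}
    }

    // A lens reduced to its essence: a getter and an immutable setter.
    record Lens<S, A>(Function<S, A> get, BiFunction<S, A, S> set) {}

    static <S, A, E> Either<E, S> modifyEither(
            Lens<S, A> lens, Function<A, Either<E, A>> modifier, S source) {
        return switch (modifier.apply(lens.get().apply(source))) {
            case Either.Left<E, A>(E error) -> new Either.Left<>(error);
            case Either.Right<E, A>(A updated) -> new Either.Right<>(lens.set().apply(source, updated));
        };
    }

    static <S, A, E> Either<E, S> setIfValid(
            Lens<S, A> lens, Function<A, Either<E, A>> validator, A newValue, S source) {
        // The current value is discarded; only the new value is validated.
        return modifyEither(lens, ignored -> validator.apply(newValue), source);
    }

    record User(String name) {}

    static Either<String, String> validName(String name) {
        return name.length() >= 2
                ? new Either.Right<>(name)
                : new Either.Left<>("Name must be at least 2 characters");
    }
}
```

Seen this way, the choice between them is purely about where the candidate value comes from: derived from the old value (modifyEither) or supplied from outside (setIfValid).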
Composing Lens Extensions
Lens extensions compose naturally with other optics operations:
Chaining Multiple Updates
UserProfile original = new UserProfile("u1", "alice", "ALICE@EXAMPLE.COM", 30, null);
Lens<UserProfile, String> nameLens = UserProfileLenses.name();
Lens<UserProfile, String> emailLens = UserProfileLenses.email();
// Chain multiple validations
Either<String, UserProfile> result = modifyEither(
nameLens,
name -> Either.right(capitalize(name)),
original
).flatMap(user ->
modifyEither(
emailLens,
email -> Either.right(email.toLowerCase()),
user
)
);
// Either.right(UserProfile with name "Alice", email "alice@example.com")
Nested Structure Updates
@GenerateLenses
record Address(String street, String city, String postcode) {}
@GenerateLenses
record User(String name, Address address) {}
Lens<User, Address> addressLens = UserLenses.address();
Lens<Address, String> postcodeLens = AddressLenses.postcode();
Lens<User, String> userPostcodeLens = addressLens.andThen(postcodeLens);
User user = new User("Alice", new Address("123 Main St", "London", "SW1A 1AA"));
// Validate and update nested field
Either<String, User> updated = modifyEither(
userPostcodeLens,
postcode -> validatePostcode(postcode),
user
);
Common Patterns
Pattern 1: Form Validation
Validating individual form fields with immediate feedback:
public Either<String, UserProfile> validateAndUpdateEmail(
UserProfile profile,
String newEmail
) {
Lens<UserProfile, String> emailLens = UserProfileLenses.email();
return modifyEither(
emailLens,
email -> {
if (email == null || email.isEmpty()) {
return Either.left("Email is required");
}
if (!email.contains("@")) {
return Either.left("Email must contain @");
}
if (!email.matches("^[A-Za-z0-9+_.-]+@[A-Za-z0-9.-]+$")) {
return Either.left("Email format is invalid");
}
return Either.right(email.toLowerCase());
},
profile
);
}
Pattern 2: Safe Field Access with Default
Safely accessing nullable fields and providing defaults:
Lens<UserProfile, String> bioLens = UserProfileLenses.bio();
String displayBio = getMaybe(bioLens, profile)
.orElse("No bio provided");
// Or with transformation
String formattedBio = getMaybe(bioLens, profile)
.map(bio -> bio.length() > 100 ? bio.substring(0, 100) + "..." : bio)
.orElse("No bio");
Pattern 3: Database Operations with Exception Handling
Performing database updates that might fail:
public Try<User> updateUserInDatabase(User user, String newEmail) {
Lens<User, String> emailLens = UserLenses.email();
return modifyTry(
emailLens,
email -> Try.of(() -> {
// Validate email is unique in database
if (emailExists(email)) {
throw new DuplicateEmailException("Email already in use");
}
// Update in database
database.updateEmail(user.id(), email);
return email;
}),
user
);
}
Before/After Comparison
Let's see a complete real-world scenario:
Scenario: User profile update form with validation.
public class UserProfileUpdater {
public UserProfile updateProfile(
UserProfile profile,
String newEmail,
Integer newAge,
String newBio
) throws ValidationException {
// Email validation
if (newEmail != null) {
if (!newEmail.contains("@")) {
throw new ValidationException("Invalid email");
}
profile = new UserProfile(
profile.userId(),
profile.name(),
newEmail.toLowerCase(),
profile.age(),
profile.bio()
);
}
// Age validation
if (newAge != null) {
if (newAge < 0 || newAge > 150) {
throw new ValidationException("Invalid age");
}
profile = new UserProfile(
profile.userId(),
profile.name(),
profile.email(),
newAge,
profile.bio()
);
}
// Bio update (optional)
if (newBio != null && newBio.length() > 10) {
profile = new UserProfile(
profile.userId(),
profile.name(),
profile.email(),
profile.age(),
newBio
);
}
return profile;
}
}
Problems:
- Repeated record construction (error-prone)
- Mixed validation and update logic
- Throws exceptions (not functional)
- Can't collect multiple errors
- Hard to test individual validations
public class UserProfileUpdater {
public Either<List<String>, UserProfile> updateProfile(
UserProfile profile,
String newEmail,
Integer newAge,
String newBio
) {
Lens<UserProfile, String> emailLens = UserProfileLenses.email();
Lens<UserProfile, Integer> ageLens = UserProfileLenses.age();
Lens<UserProfile, String> bioLens = UserProfileLenses.bio();
// Update email
Either<String, UserProfile> emailResult =
modifyEither(emailLens, this::validateEmail, profile);
// Update age
Either<String, UserProfile> ageResult =
emailResult.flatMap(p -> modifyEither(ageLens, this::validateAge, p));
// Update bio (optional)
Either<String, UserProfile> finalResult =
ageResult.flatMap(p -> modifyMaybe(bioLens, this::formatBio, p)
.map(Either::<String, UserProfile>right)
.orElse(Either.right(p)));
return finalResult.mapLeft(List::of);
}
private Either<String, String> validateEmail(String email) {
if (!email.contains("@")) {
return Either.left("Email must contain @");
}
return Either.right(email.toLowerCase());
}
private Either<String, Integer> validateAge(Integer age) {
if (age < 0 || age > 150) {
return Either.left("Age must be between 0 and 150");
}
return Either.right(age);
}
private Maybe<String> formatBio(String bio) {
return bio.length() > 10 ? Maybe.just(bio) : Maybe.nothing();
}
}
Benefits:
- Clean separation of concerns
- Functional error handling
- Each validation is testable in isolation
- Lenses handle immutable updates
- Clear data flow
Best Practices
Use getMaybe when accessing optional fields
Use modifyEither for fail-fast single field validation
Use modifyValidated for consistency with multi-field validation (error accumulation)
Use modifyTry for operations that throw exceptions (database, I/O, network)
Use setIfValid when setting user-provided values with validation
Your validation and modification functions should be pure:
// ✅ Pure validation
private Either<String, String> validateEmail(String email) {
if (!email.contains("@")) {
return Either.left("Invalid email");
}
return Either.right(email.toLowerCase());
}
// ❌ Impure validation (has side effects)
private Either<String, String> validateEmail(String email) {
logger.info("Validating email: {}", email); // Side effect!
if (!email.contains("@")) {
return Either.left("Invalid email");
}
return Either.right(email.toLowerCase());
}
Pure functions are easier to test, reason about, and compose.
Lens extensions handle null field values, but not null source objects:
UserProfile profile = null;
Maybe<String> bio = getMaybe(bioLens, profile); // NullPointerException!
// Wrap the source in Maybe first
Maybe<UserProfile> maybeProfile = Maybe.fromNullable(profile);
Maybe<String> safeBio = maybeProfile.flatMap(p -> getMaybe(bioLens, p));
Working Example
For a complete, runnable demonstration of all lens extension patterns, see:
This example demonstrates:
- All lens extension methods
- User profile management with validation
- Null-safe field access
- Exception-safe database operations
- Form validation patterns
- Real-world scenarios with before/after comparisons
Summary
Lens extensions provide:
🛡️ Safety Rails — Handle null values, validation, and exceptions without cluttering business logic
🎯 Separation of Concerns — Lenses define structure, validators define rules, core types carry results
🔄 Composability — Chain multiple validations and updates in a functional pipeline
📊 Error Handling — Choose fail-fast (Either) or exception-safe (Try) based on your needs
🧪 Testability — Validation logic is pure and easy to test in isolation
Next Steps
Now that you understand lens extensions for individual fields, learn how to process collections with validation and error handling:
Next: Traversal Extensions: Bulk Operations
Or return to the overview:
Back: Working with Core Types and Optics
Traversal Extensions: Bulk Operations with Error Handling
Traversals are optics that focus on zero or more elements in a structure—perfect for working with collections. But what happens when you need to validate all items in a list, accumulate errors, or selectively update elements?
Traditional traversal operations work well with clean, valid data. Real-world applications, however, require bulk validation, error accumulation, and partial updates. Traversal Extensions provide these capabilities whilst maintaining the elegance of functional composition.
Think of traversal extensions as quality control for production lines—they can inspect all items, reject the batch at the first defect (fail-fast), collect all defects for review (error accumulation), or fix what's fixable and flag the rest (selective modification).
The Problem: Bulk Operations Without Error Handling
Let's see the traditional approach to processing collections with validation:
public List<OrderItem> validateAndUpdatePrices(List<OrderItem> items) {
List<OrderItem> result = new ArrayList<>();
List<String> errors = new ArrayList<>();
for (OrderItem item : items) {
BigDecimal price = item.price();
// Validation
if (price.compareTo(BigDecimal.ZERO) < 0) {
errors.add("Invalid price for " + item.sku());
// Now what? Skip this item? Throw exception? Continue?
} else if (price.compareTo(new BigDecimal("10000")) > 0) {
errors.add("Price too high for " + item.sku());
} else {
// Apply discount
BigDecimal discounted = price.multiply(new BigDecimal("0.9"));
result.add(new OrderItem(
item.sku(),
item.name(),
discounted,
item.quantity(),
item.status()
));
}
}
if (!errors.isEmpty()) {
// What do we do with the errors?
// Throw exception and lose all progress?
// Log them and continue with partial results?
}
return result;
}
Problems:
- Validation and transformation logic intertwined
- Error handling strategy unclear
- Manual loop with mutable state
- Unclear what happens on partial failure
- Imperative, hard to test
public Validated<List<String>, List<OrderItem>> validateAndUpdatePrices(
List<OrderItem> items
) {
Lens<OrderItem, BigDecimal> priceLens = OrderItemLenses.price();
Traversal<List<OrderItem>, BigDecimal> allPrices =
Traversals.<OrderItem>forList().andThen(priceLens.asTraversal());
return modifyAllValidated(
allPrices,
price -> validateAndDiscount(price),
items
);
}
private Validated<String, BigDecimal> validateAndDiscount(BigDecimal price) {
if (price.compareTo(BigDecimal.ZERO) < 0) {
return Validated.invalid("Price cannot be negative");
}
if (price.compareTo(new BigDecimal("10000")) > 0) {
return Validated.invalid("Price exceeds maximum");
}
return Validated.valid(price.multiply(new BigDecimal("0.9")));
}
Clean separation: the traversal defines where (all prices), the validator defines what (validation rules), and Validated accumulates all errors or returns all results.
Available Traversal Extensions
Higher-Kinded-J provides traversal extensions in the TraversalExtensions utility class. All methods are static, designed for import with import static:
import static org.higherkindedj.optics.extensions.TraversalExtensions.*;
These extension methods are also available through the Fluent API, providing method chaining and better discoverability. For example, modifyAllEither(traversal, f, source) can also be written using OpticOps for a more fluent syntax.
Extraction Methods
getAllMaybe — Extract All Values
public static <S, A> Maybe<List<A>> getAllMaybe(Traversal<S, A> traversal, S source)
Extract all focused values into a list. Returns Maybe.just(values) if any elements exist, Maybe.nothing() for empty collections.
List<OrderItem> items = List.of(
new OrderItem("SKU001", "Laptop", new BigDecimal("999.99"), 1, "pending"),
new OrderItem("SKU002", "Mouse", new BigDecimal("29.99"), 2, "pending")
);
Lens<OrderItem, BigDecimal> priceLens = OrderItemLenses.price();
Traversal<List<OrderItem>, BigDecimal> allPrices =
Traversals.<OrderItem>forList().andThen(priceLens.asTraversal());
Maybe<List<BigDecimal>> prices = getAllMaybe(allPrices, items);
// Maybe.just([999.99, 29.99])
List<OrderItem> empty = List.of();
Maybe<List<BigDecimal>> noPrices = getAllMaybe(allPrices, empty);
// Maybe.nothing()
// Calculate total
BigDecimal total = prices
.map(list -> list.stream().reduce(BigDecimal.ZERO, BigDecimal::add))
.orElse(BigDecimal.ZERO);
Bulk Modification Methods
modifyAllMaybe — All-or-Nothing Modifications
public static <S, A> Maybe<S> modifyAllMaybe(
Traversal<S, A> traversal,
Function<A, Maybe<A>> modifier,
S source)
Apply a modification to all elements. Returns Maybe.just(updated) if all modifications succeed, Maybe.nothing() if any fail. This is an atomic operation—either everything updates or nothing does.
List<OrderItem> items = List.of(
new OrderItem("SKU001", "Laptop", new BigDecimal("100.00"), 1, "pending"),
new OrderItem("SKU002", "Mouse", new BigDecimal("20.00"), 2, "pending"),
new OrderItem("SKU003", "Keyboard", new BigDecimal("50.00"), 1, "pending")
);
Lens<OrderItem, BigDecimal> priceLens = OrderItemLenses.price();
Traversal<List<OrderItem>, BigDecimal> allPrices =
Traversals.<OrderItem>forList().andThen(priceLens.asTraversal());
// Successful: all prices ≥ £10
Maybe<List<OrderItem>> updated = modifyAllMaybe(
allPrices,
price -> price.compareTo(new BigDecimal("10")) >= 0
? Maybe.just(price.multiply(new BigDecimal("1.1"))) // 10% increase
: Maybe.nothing(),
items
);
// Maybe.just([updated items with 10% price increase])
// Failed: one price < £10
List<OrderItem> withLowPrice = List.of(
new OrderItem("SKU001", "Laptop", new BigDecimal("100.00"), 1, "pending"),
new OrderItem("SKU002", "Cheap Item", new BigDecimal("5.00"), 2, "pending")
);
Maybe<List<OrderItem>> failed = modifyAllMaybe(
allPrices,
price -> price.compareTo(new BigDecimal("10")) >= 0
? Maybe.just(price.multiply(new BigDecimal("1.1")))
: Maybe.nothing(),
withLowPrice
);
// Maybe.nothing() - entire update rolled back
Use modifyAllMaybe for atomic updates where:
- All modifications must succeed or none should apply
- Partial updates would leave data in an inconsistent state
- You want "all-or-nothing" semantics
Example: Applying currency conversion to all prices—if the exchange rate service fails for one item, you don't want some prices converted and others not.
modifyAllEither — Fail-Fast Validation
public static <S, A, E> Either<E, S> modifyAllEither(
Traversal<S, A> traversal,
Function<A, Either<E, A>> modifier,
S source)
Apply a modification with validation. Returns Either.right(updated) if all validations pass, Either.left(firstError) if any fail. Stops at the first error (fail-fast).
List<OrderItem> items = List.of(
new OrderItem("SKU001", "Laptop", new BigDecimal("999.99"), 1, "pending"),
new OrderItem("SKU002", "Mouse", new BigDecimal("-10.00"), 2, "pending"), // Invalid!
new OrderItem("SKU003", "Keyboard", new BigDecimal("79.99"), 1, "pending")
);
Lens<OrderItem, BigDecimal> priceLens = OrderItemLenses.price();
Traversal<List<OrderItem>, BigDecimal> allPrices =
Traversals.<OrderItem>forList().andThen(priceLens.asTraversal());
// Fail-fast: stops at first invalid price
Either<String, List<OrderItem>> result = modifyAllEither(
allPrices,
price -> {
if (price.compareTo(BigDecimal.ZERO) < 0) {
return Either.left("Price cannot be negative");
}
if (price.compareTo(new BigDecimal("10000")) > 0) {
return Either.left("Price exceeds maximum");
}
return Either.right(price);
},
items
);
// Either.left("Price cannot be negative")
// Stopped at SKU002, didn't check SKU003
result.match(
error -> System.out.println("❌ Validation failed: " + error),
updated -> System.out.println("✅ All items valid")
);
Use modifyAllEither for fail-fast validation where:
- You want to stop immediately at the first error
- Subsequent validations depend on earlier ones passing
- You want efficient rejection of invalid data
- The first error is sufficient feedback
Example: API request validation—reject the request immediately if any field is invalid.
modifyAllValidated — Error Accumulation
public static <S, A, E> Validated<List<E>, S> modifyAllValidated(
Traversal<S, A> traversal,
Function<A, Validated<E, A>> modifier,
S source)
Apply a modification with validation. Returns Validated.valid(updated) if all validations pass, Validated.invalid(allErrors) if any fail. Collects all errors (error accumulation).
List<OrderItem> items = List.of(
new OrderItem("SKU001", "Laptop", new BigDecimal("-100.00"), 1, "pending"), // Error 1
new OrderItem("SKU002", "Mouse", new BigDecimal("29.99"), -5, "pending"),
new OrderItem("SKU003", "Keyboard", new BigDecimal("-50.00"), 1, "pending") // Error 2
);
Lens<OrderItem, BigDecimal> priceLens = OrderItemLenses.price();
Traversal<List<OrderItem>, BigDecimal> allPrices =
Traversals.<OrderItem>forList().andThen(priceLens.asTraversal());
// Accumulate ALL errors
Validated<List<String>, List<OrderItem>> result = modifyAllValidated(
allPrices,
price -> {
if (price.compareTo(BigDecimal.ZERO) < 0) {
return Validated.invalid("Price cannot be negative: " + price);
}
if (price.compareTo(new BigDecimal("10000")) > 0) {
return Validated.invalid("Price exceeds maximum: " + price);
}
return Validated.valid(price);
},
items
);
// Validated.invalid(["Price cannot be negative: -100.00", "Price cannot be negative: -50.00"])
// Checked ALL items and collected ALL errors
result.match(
errors -> {
System.out.println("❌ Validation failed with " + errors.size() + " errors:");
errors.forEach(err -> System.out.println(" • " + err));
},
updated -> System.out.println("✅ All items valid")
);
Use modifyAllValidated for error accumulation where:
- You want to collect all errors, not just the first one
- Better user experience (show all problems at once)
- Form validation where users need to fix all fields
- Batch processing where you want a complete error report
Example: User registration form—show all validation errors (invalid email, weak password, missing fields) rather than one at a time.
Fail-Fast (modifyAllEither):
// API request validation - reject immediately
Either<String, List<Item>> result = modifyAllEither(
allPrices,
price -> validatePrice(price),
items
);
return result.fold(
error -> ResponseEntity.badRequest().body(error),
valid -> ResponseEntity.ok(processOrder(valid))
);
Error Accumulation (modifyAllValidated):
// Form validation - show all errors
Validated<List<String>, List<Item>> result = modifyAllValidated(
allPrices,
price -> validatePrice(price),
items
);
return result.fold(
errors -> showFormErrors(errors), // Display ALL errors to user
valid -> submitForm(valid)
);
modifyWherePossible — Selective Modification
public static <S, A> S modifyWherePossible(
Traversal<S, A> traversal,
Function<A, Maybe<A>> modifier,
S source)
Apply a modification selectively. Modifies elements where the function returns Maybe.just(value), leaves others unchanged. This is a best-effort operation—always succeeds, modifying what it can.
List<OrderItem> items = List.of(
new OrderItem("SKU001", "Laptop", new BigDecimal("999.99"), 1, "pending"),
new OrderItem("SKU002", "Mouse", new BigDecimal("29.99"), 2, "shipped"), // Don't modify
new OrderItem("SKU003", "Keyboard", new BigDecimal("79.99"), 1, "pending")
);
Lens<OrderItem, String> statusLens = OrderItemLenses.status();
Traversal<List<OrderItem>, String> allStatuses =
Traversals.<OrderItem>forList().andThen(statusLens.asTraversal());
// Update only "pending" items
List<OrderItem> updated = modifyWherePossible(
allStatuses,
status -> status.equals("pending")
? Maybe.just("processing")
: Maybe.nothing(), // Leave non-pending unchanged
items
);
// [
// OrderItem(..., "processing"), // SKU001 updated
// OrderItem(..., "shipped"), // SKU002 unchanged
// OrderItem(..., "processing") // SKU003 updated
// ]
System.out.println("Updated statuses:");
updated.forEach(item ->
System.out.println(" " + item.sku() + ": " + item.status())
);
// Apply 10% discount to items over £100 (premium items only)
Lens<OrderItem, BigDecimal> priceLens = OrderItemLenses.price();
Traversal<List<OrderItem>, BigDecimal> allPrices =
Traversals.<OrderItem>forList().andThen(priceLens.asTraversal());
List<OrderItem> discounted = modifyWherePossible(
allPrices,
price -> price.compareTo(new BigDecimal("100")) > 0
? Maybe.just(price.multiply(new BigDecimal("0.9")))
: Maybe.nothing(), // Leave cheaper items at full price
items
);
Use modifyWherePossible for selective updates where:
- Only some elements should be modified based on a condition
- Partial updates are acceptable and expected
- You want to "fix what's fixable"
- The operation should never fail
Example: Status transitions—update items in "pending" status to "processing", but leave "shipped" items unchanged.
Analysis Methods
countValid — Count Passing Validation
public static <S, A, E> int countValid(
Traversal<S, A> traversal,
Function<A, Either<E, A>> validator,
S source)
Count how many elements pass validation without modifying anything.
List<OrderItem> items = List.of(
new OrderItem("SKU001", "Laptop", new BigDecimal("999.99"), 1, "pending"),
new OrderItem("SKU002", "Mouse", new BigDecimal("-10.00"), 2, "pending"), // Invalid
new OrderItem("SKU003", "Keyboard", new BigDecimal("79.99"), 1, "pending"),
new OrderItem("SKU004", "Monitor", new BigDecimal("-50.00"), 1, "pending") // Invalid
);
Lens<OrderItem, BigDecimal> priceLens = OrderItemLenses.price();
Traversal<List<OrderItem>, BigDecimal> allPrices =
Traversals.<OrderItem>forList().andThen(priceLens.asTraversal());
int validCount = countValid(
allPrices,
price -> price.compareTo(BigDecimal.ZERO) >= 0
? Either.right(price)
: Either.left("Negative price"),
items
);
// 2
System.out.println("Valid items: " + validCount + " out of " + items.size());
System.out.println("Invalid items: " + (items.size() - validCount));
Use countValid for reporting and metrics where:
- You need to know how many items are valid without modifying them
- Generating validation reports or dashboards
- Pre-checking before bulk operations
- Displaying progress to users
Example: Show user "3 out of 5 addresses are valid" before allowing checkout.
collectErrors — Gather Validation Failures
public static <S, A, E> List<E> collectErrors(
Traversal<S, A> traversal,
Function<A, Either<E, A>> validator,
S source)
Collect all validation errors without modifying anything. Returns empty list if all valid.
List<OrderItem> items = List.of(
new OrderItem("SKU001", "Laptop", new BigDecimal("999.99"), 1, "pending"),
new OrderItem("SKU002", "Mouse", new BigDecimal("-10.00"), 2, "pending"),
new OrderItem("SKU003", "Keyboard", new BigDecimal("79.99"), 1, "pending"),
new OrderItem("SKU004", "Monitor", new BigDecimal("-50.00"), -1, "pending")
);
Lens<OrderItem, BigDecimal> priceLens = OrderItemLenses.price();
Traversal<List<OrderItem>, BigDecimal> allPrices =
Traversals.<OrderItem>forList().andThen(priceLens.asTraversal());
List<String> errors = collectErrors(
allPrices,
price -> price.compareTo(BigDecimal.ZERO) >= 0
? Either.right(price)
: Either.left("Negative price: " + price),
items
);
// ["Negative price: -10.00", "Negative price: -50.00"]
if (errors.isEmpty()) {
System.out.println("✅ All prices valid");
} else {
System.out.println("❌ Found " + errors.size() + " invalid prices:");
errors.forEach(err -> System.out.println(" • " + err));
}
Use collectErrors for error reporting where:
- You want a list of all problems without modifying data
- Generating validation reports
- Pre-flight checks before expensive operations
- Displaying errors to users
Example: Validate uploaded CSV file and show all errors before importing.
Complete Real-World Example
Let's see a complete order validation pipeline combining multiple traversal extensions:
public sealed interface ValidationResult permits OrderApproved, OrderRejected {}
record OrderApproved(Order order) implements ValidationResult {}
record OrderRejected(List<String> errors) implements ValidationResult {}
public ValidationResult validateOrder(Order order) {
Lens<OrderItem, BigDecimal> priceLens = OrderItemLenses.price();
Lens<OrderItem, Integer> quantityLens = OrderItemLenses.quantity();
Traversal<List<OrderItem>, BigDecimal> allPrices =
Traversals.<OrderItem>forList().andThen(priceLens.asTraversal());
Traversal<List<OrderItem>, Integer> allQuantities =
Traversals.<OrderItem>forList().andThen(quantityLens.asTraversal());
// Step 1: Validate all prices (accumulate errors)
List<String> priceErrors = collectErrors(
allPrices,
price -> validatePrice(price),
order.items()
);
// Step 2: Validate all quantities (accumulate errors)
List<String> quantityErrors = collectErrors(
allQuantities,
qty -> validateQuantity(qty),
order.items()
);
// Step 3: Combine all errors
List<String> allErrors = Stream.of(priceErrors, quantityErrors)
.flatMap(List::stream)
.toList();
if (!allErrors.isEmpty()) {
return new OrderRejected(allErrors);
}
// Step 4: Apply discounts to valid items
List<OrderItem> discounted = modifyWherePossible(
allPrices,
price -> price.compareTo(new BigDecimal("100")) > 0
? Maybe.just(price.multiply(new BigDecimal("0.9")))
: Maybe.nothing(),
order.items()
);
Order finalOrder = new Order(
order.orderId(),
discounted,
order.customerEmail()
);
return new OrderApproved(finalOrder);
}
private Either<String, BigDecimal> validatePrice(BigDecimal price) {
if (price.compareTo(BigDecimal.ZERO) < 0) {
return Either.left("Price cannot be negative");
}
if (price.compareTo(new BigDecimal("10000")) > 0) {
return Either.left("Price exceeds maximum");
}
return Either.right(price);
}
private Either<String, Integer> validateQuantity(Integer qty) {
if (qty <= 0) {
return Either.left("Quantity must be positive");
}
if (qty > 100) {
return Either.left("Quantity exceeds maximum");
}
return Either.right(qty);
}
// Usage
ValidationResult result = validateOrder(order);
switch (result) {
case OrderApproved approved -> processOrder(approved.order());
case OrderRejected rejected -> displayErrors(rejected.errors());
}
Before/After Comparison
Scenario: Validating and updating prices for all items in a shopping cart.
public class CartValidator {
public ValidationResult validateCart(List<CartItem> items) {
List<String> errors = new ArrayList<>();
List<CartItem> validated = new ArrayList<>();
boolean hasErrors = false;
for (int i = 0; i < items.size(); i++) {
CartItem item = items.get(i);
BigDecimal price = item.price();
// Validate price
if (price == null) {
errors.add("Item " + i + ": Price is required");
hasErrors = true;
continue;
}
if (price.compareTo(BigDecimal.ZERO) < 0) {
errors.add("Item " + i + ": Price cannot be negative");
hasErrors = true;
continue;
}
if (price.compareTo(new BigDecimal("10000")) > 0) {
errors.add("Item " + i + ": Price too high");
hasErrors = true;
continue;
}
// Apply tax
BigDecimal withTax = price.multiply(new BigDecimal("1.2"));
CartItem updated = new CartItem(
item.id(),
item.name(),
withTax,
item.quantity()
);
validated.add(updated);
}
if (hasErrors) {
return new ValidationFailure(errors);
}
return new ValidationSuccess(validated);
}
}
Problems:
- Manual loop with index tracking
- Mutable state (errors, validated, hasErrors)
- Validation and transformation intertwined
- Hard to test validation logic separately
- Imperative, hard to reason about
public class CartValidator {
public Validated<List<String>, List<CartItem>> validateCart(List<CartItem> items) {
Lens<CartItem, BigDecimal> priceLens = CartItemLenses.price();
Traversal<List<CartItem>, BigDecimal> allPrices =
Traversals.<CartItem>forList().andThen(priceLens.asTraversal());
return modifyAllValidated(
allPrices,
price -> validateAndApplyTax(price),
items
);
}
private Validated<String, BigDecimal> validateAndApplyTax(BigDecimal price) {
if (price == null) {
return Validated.invalid("Price is required");
}
if (price.compareTo(BigDecimal.ZERO) < 0) {
return Validated.invalid("Price cannot be negative");
}
if (price.compareTo(new BigDecimal("10000")) > 0) {
return Validated.invalid("Price too high");
}
return Validated.valid(price.multiply(new BigDecimal("1.2")));
}
}
Benefits:
- Declarative, functional style
- No mutable state
- Validation logic is pure and testable
- Automatic error accumulation
- Clear separation of concerns
- Composable with other operations
Best Practices
Use modifyAllEither for fail-fast validation:
- API requests (reject immediately)
- Critical validations (stop on first error)
- When errors are independent
Use modifyAllValidated for error accumulation:
- Form validation (show all errors)
- Batch processing (complete error report)
- Better user experience
Use modifyWherePossible for selective updates:
- Conditional modifications
- Best-effort operations
- Status transitions
Your validation functions should be pure (no side effects):
// ✅ Pure validator
private Validated<String, BigDecimal> validatePrice(BigDecimal price) {
if (price.compareTo(BigDecimal.ZERO) < 0) {
return Validated.invalid("Price cannot be negative");
}
return Validated.valid(price);
}
// ❌ Impure validator (has side effects)
private Validated<String, BigDecimal> validatePrice(BigDecimal price) {
logger.info("Validating price: {}", price); // Side effect!
database.recordValidation(price); // Side effect!
if (price.compareTo(BigDecimal.ZERO) < 0) {
return Validated.invalid("Price cannot be negative");
}
return Validated.valid(price);
}
Pure validators are easier to test, compose, and reason about.
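Because a pure validator is just a function of its input, it can be exercised directly with no mocks or fixtures. A minimal sketch — `Optional<String>` is used here as a plain-JDK stand-in for `Validated` (empty means valid, a present value carries the error) so the snippet is self-contained:

```java
import java.math.BigDecimal;
import java.util.Optional;

public class ValidatorTest {
    // Pure validator: empty means valid, a present value carries the error message.
    // Optional<String> is a plain-JDK stand-in for the library's Validated type.
    static Optional<String> priceError(BigDecimal price) {
        if (price.compareTo(BigDecimal.ZERO) < 0) {
            return Optional.of("Price cannot be negative");
        }
        if (price.compareTo(new BigDecimal("10000")) > 0) {
            return Optional.of("Price exceeds maximum");
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // No mocks, no setup: just inputs and expected outputs
        System.out.println(priceError(new BigDecimal("99.99")));  // Optional.empty
        System.out.println(priceError(new BigDecimal("-1")));     // Optional[Price cannot be negative]
    }
}
```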
When using modifyAllValidated, errors are accumulated in the order elements are traversed:
List<OrderItem> items = List.of(item1, item2, item3); // item1 and item3 have errors
Validated<List<String>, List<OrderItem>> result = modifyAllValidated(...);
// Errors will be: [error from item1, error from item3]
// Order is preserved
This is usually what you want, but be aware if error order matters for your use case.
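The ordering guarantee follows from the left-to-right traversal. A self-contained plain-Java sketch of the accumulation step (illustrative only, not the library's implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class ErrorOrderDemo {
    // Accumulate all errors in traversal order; valid elements contribute none
    static <A, E> List<E> accumulate(List<A> items, Function<A, List<E>> check) {
        List<E> errors = new ArrayList<>();
        for (A item : items) {  // left-to-right, so error order mirrors item order
            errors.addAll(check.apply(item));
        }
        return errors;
    }

    public static void main(String[] args) {
        List<Integer> prices = List.of(-100, 30, -50);  // first and third invalid
        List<String> errors = accumulate(prices,
            p -> p < 0 ? List.of("Negative price: " + p) : List.<String>of());
        System.out.println(errors);  // [Negative price: -100, Negative price: -50]
    }
}
```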
Use countValid and collectErrors for pre-flight checks:
// Check before expensive operation
List<String> errors = collectErrors(allPrices, this::validatePrice, items);
if (!errors.isEmpty()) {
logger.warn("Validation would fail with {} errors", errors.size());
return Either.left("Pre-flight check failed");
}
// Proceed with expensive operation
return modifyAllEither(allPrices, this::applyComplexTransformation, items);
Working Example
For a complete, runnable demonstration of all traversal extension patterns, see:
This example demonstrates:
- All traversal extension methods
- Fail-fast vs error accumulation strategies
- Selective modification patterns
- Counting and error collection
- Complete order validation pipeline
- Real-world e-commerce scenarios
Summary
Traversal extensions provide:
🗺️ Bulk Operations — Process entire collections with validation and error handling
📊 Error Strategies — Choose fail-fast (Either) or error accumulation (Validated)
🎯 Selective Updates — Modify only elements that meet criteria
📈 Analysis Tools — Count valid items and collect errors without modification
🔄 Composability — Chain with lenses and other optics for complex workflows
🧪 Testability — Pure validation functions are easy to test in isolation
Next Steps
You've now learned all three core type integration approaches! Return to the overview to see how they work together:
Back: Working with Core Types and Optics
Or explore complete integration patterns:
See Also: Integration Patterns Example — Complete e-commerce workflow combining all approaches
A Blog on Types and Functional Patterns
This blog series provides excellent background reading whilst you're learning the techniques used in Higher-Kinded-J, exploring the foundational ideas that inspired its development. Each post builds knowledge that will deepen your understanding of functional programming patterns in Java.
In this post, we explore the power of Algebraic Data Types (ADT) with Pattern Matching in Java. We look at how they help us model complex business domains and how using them together gives improvements on the traditional Visitor Pattern.
In this post, we look at Variance in Generics and how it is handled in Java and Scala. We consider use-site and declaration-site approaches and the trade-offs of erasure. Finally, we take a look at Phantom and Existential types and how they can enhance the capabilities of the type system when it comes to modelling.
In this post, we will see how Intersection types help us better model type constraints, promoting reuse, and how Union types increase code flexibility. We will compare and contrast approaches and how to use them in the latest Java and Scala.
Learn about how Functors and Monads provide patterns to write cleaner, more composable, and robust code that helps us deal with operations like handling nulls, managing errors and sequencing asynchronous actions.
In this post, we will see how Higher Kinded Types can help increase the flexibility of our code and reduce duplication.
In this post, we will see how Thunks and Trampolines can help solve problems by converting deep stack-based recursion into heap-based iteration, helping to prevent StackOverflowErrors.
Glossary of Functional Programming Terms
- Key terminology used throughout Higher-Kinded-J documentation
- Explanations tailored for mid-level Java developers
- Practical examples to reinforce understanding
- Quick reference for concepts you encounter whilst coding
This glossary provides clear, practical explanations of functional programming and Higher-Kinded-J concepts. Each term includes Java-friendly explanations and examples where helpful.
Type System Concepts
Contravariant
Definition: A type parameter is contravariant when it appears in an "input" or "consumer" position. If A is a subtype of B, then F<B> can be treated as a subtype of F<A> when accepting values (note the direction reversal!).
Java Analogy: Think of ? super T in Java generics—this is contravariant. Also, function parameters are contravariant.
Example:
// Contravariant behaviour in Java (function parameters)
// A comparator that handles Object can be used where one handling String is needed
Comparator<Object> objectComparator = (a, b) -> a.toString().compareTo(b.toString());
Comparator<? super String> stringComparator = objectComparator; // ✅ Valid - contravariance via the super wildcard
// Note: without the wildcard, Java generics are invariant, so
// Comparator<String> s = objectComparator; would NOT compile
// In Higher-Kinded-J: Profunctor's first parameter is contravariant
Profunctor<FunctionKind.Witness> prof = FunctionProfunctor.INSTANCE;
Function<String, Integer> stringLength = String::length;
// lmap is contravariant - we pre-process the INPUT
Kind2<FunctionKind.Witness, Integer, Integer> intLength =
prof.lmap(Object::toString, FUNCTION.widen(stringLength));
// Now accepts Integer input by converting it to String first
Think Of It As: "Values flow INTO the container" - you're consuming/accepting data.
Important: The direction is reversed! A function that accepts Object is more flexible than one that accepts only String, so Function<Object, R> is a "subtype" of Function<String, R> in terms of what it can handle.
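This reversal can be demonstrated directly with Java's `? super` wildcard. A self-contained sketch:

```java
import java.util.function.Function;

public class ContravarianceDemo {
    // Accepts any function that can handle at least Strings
    static int applyToString(Function<? super String, Integer> f) {
        return f.apply("hello");
    }

    public static void main(String[] args) {
        Function<Object, Integer> hash = Object::hashCode;        // handles any input
        Function<CharSequence, Integer> len = CharSequence::length;
        // Both are usable where a String-consuming function is expected,
        // because Object and CharSequence are supertypes of String:
        System.out.println(applyToString(len));                        // 5
        System.out.println(applyToString(hash) == "hello".hashCode()); // true
    }
}
```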
Where You'll See It:
- The first parameter of Profunctor (input side)
- Function parameters
- Consumer types
Covariant
Definition: A type parameter is covariant when it appears in an "output" or "producer" position. If A is a subtype of B, then F<A> can be treated as a subtype of F<B> when reading values.
Java Analogy: Think of ? extends T in Java generics—this is covariant.
Example:
// Covariant behaviour in Java collections (read-only)
List<? extends Number> numbers = new ArrayList<Integer>();
Number n = numbers.get(0); // ✅ Safe to read out as Number
// In Higher-Kinded-J: Functor is covariant in its type parameter
Functor<ListKind.Witness> functor = ListFunctor.INSTANCE;
Kind<ListKind.Witness, Integer> ints = LIST.widen(List.of(1, 2, 3));
Kind<ListKind.Witness, String> strings = functor.map(Object::toString, ints);
// Integer -> String transformation (output direction)
Think Of It As: "Values flow OUT of the container" - you're producing/reading data.
Where You'll See It:
- Functor's type parameter (transforms outputs)
- Bifunctor's both parameters (both are outputs)
- The second parameter of Profunctor (output side)
- Return types of functions
Invariant
Definition: A type parameter is invariant when it appears in both input and output positions, or when the type doesn't allow any subtype substitution.
Java Analogy: Most mutable collections in Java are invariant—List<Integer> is not a subtype of List<Number>.
Example:
// Invariant behaviour in Java
List<Integer> ints = new ArrayList<>();
List<Number> nums = ints; // ❌ Compilation error!
// Not allowed because:
// - You could read Number (covariant)
// - You could write Number (contravariant)
// Both directions would violate type safety with mutable collections
// In Higher-Kinded-J: MonadError's error type is typically invariant
MonadError<EitherKind.Witness<String>, String> monadError = EitherMonadError.instance();
// The String error type is fixed—you can't substitute it with Object or CharSequence
Think Of It As: "Locked to exactly this type" - no flexibility in either direction.
Where You'll See It:
- Mutable collections
- Types used in both input and output positions
- Type parameters that don't participate in transformation operations
Variance Summary Table
| Variance | Direction | Java Analogy | Example Type Class | Intuition |
|---|---|---|---|---|
| Covariant | Output/Producer | ? extends T | Functor, Applicative, Monad | "Reading out" |
| Contravariant | Input/Consumer | ? super T | Profunctor (first param) | "Writing in" (reversed) |
| Invariant | Neither/Both | No wildcards | Monad error type | "Exact match required" |
Higher-Kinded Type Simulation
Defunctionalisation
Definition: A technique for simulating higher-kinded types in languages that don't natively support them. Instead of passing type constructors as parameters, we represent them with marker types (witnesses) and use these as ordinary type parameters.
The Problem It Solves: Java's type system cannot parametrise over type constructors. You cannot write <F<_>> in Java to mean "any container type F". Defunctionalisation works around this by using witness types to represent type constructors.
Example:
// ❌ What we'd like to write but can't in Java:
public <F<_>, A, B> F<B> map(Function<A, B> f, F<A> fa) { ... }
// ✅ What we write using defunctionalisation:
public <F, A, B> Kind<F, B> map(Function<A, B> f, Kind<F, A> fa) { ... }
// Where F is a witness type like OptionalKind.Witness or ListKind.Witness
How It Works:
- Define a marker interface (witness type) for each type constructor (e.g., ListKind.Witness for List)
- Use Kind<F, A>, where F is the witness and A is the type parameter
- Provide helper methods to convert between concrete types and their Kind representations
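These three steps can be sketched with a minimal, self-contained encoding. The `Box`, `BoxWitness`, and `BOX_FUNCTOR` names here are illustrative only, not part of the library:

```java
import java.util.function.Function;

public class DefunctionalisationDemo {
    // Step 2: the generic Kind interface - F is a witness, A the element type
    interface Kind<F, A> {}

    // A simple container we want to abstract over
    record Box<A>(A value) implements Kind<BoxWitness, A> {}

    // Step 1: a marker (witness) type standing in for the Box type constructor
    static final class BoxWitness { private BoxWitness() {} }

    // A Functor written against Kind, not against Box directly
    interface Functor<F> {
        <A, B> Kind<F, B> map(Function<A, B> f, Kind<F, A> fa);
    }

    static final Functor<BoxWitness> BOX_FUNCTOR = new Functor<>() {
        @Override
        public <A, B> Kind<BoxWitness, B> map(Function<A, B> f, Kind<BoxWitness, A> fa) {
            // Step 3: narrow the Kind back to the concrete type
            Box<A> box = (Box<A>) fa;
            return new Box<>(f.apply(box.value()));
        }
    };

    public static void main(String[] args) {
        Kind<BoxWitness, Integer> boxed = new Box<>(21);
        Box<Integer> doubled = (Box<Integer>) BOX_FUNCTOR.map(x -> x * 2, boxed);
        System.out.println(doubled.value()); // 42
    }
}
```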
Where You'll See It: Throughout the Higher-Kinded-J library - it's the foundation of the entire HKT simulation.
Related: Core Concepts
Higher-Kinded Type (HKT)
Definition: A type that abstracts over type constructors. In languages with HKT support, you can write generic code that works with any "container" type like List, Optional, or CompletableFuture without knowing which one at compile time.
Java Analogy: Regular generics let you abstract over types (<T>). Higher-kinded types let you abstract over type constructors (<F<_>>).
Example:
// Regular generics (abstracting over types):
public <T> T identity(T value) { return value; }
// Higher-kinded types (abstracting over type constructors):
public <F> Kind<F, Integer> increment(Functor<F> functor, Kind<F, Integer> fa) {
return functor.map(x -> x + 1, fa);
}
// Works with any Functor:
increment(OptionalFunctor.INSTANCE, OPTIONAL.widen(Optional.of(5))); // Optional[6]
increment(ListFunctor.INSTANCE, LIST.widen(List.of(1, 2, 3))); // [2, 3, 4]
Why It Matters: Enables writing truly generic, reusable functional code that works across different container types.
Related: HKT Introduction
Kind
Definition: The core interface in Higher-Kinded-J that simulates higher-kinded types. Kind<F, A> represents a type constructor F applied to a type A.
Structure:
- Kind<F, A> - Single type parameter (e.g., List<A>, Optional<A>)
- Kind2<F, A, B> - Two type parameters (e.g., Either<A, B>, Function<A, B>)
Example:
// Standard Java types and their Kind representations:
Optional<String> ≈ Kind<OptionalKind.Witness, String>
List<Integer> ≈ Kind<ListKind.Witness, Integer>
Either<String, Integer> ≈ Kind2<EitherKind2.Witness, String, Integer>
Function<String, Integer> ≈ Kind2<FunctionKind.Witness, String, Integer>
// Converting between representations:
Optional<String> opt = Optional.of("hello");
Kind<OptionalKind.Witness, String> kindOpt = OPTIONAL.widen(opt);
Optional<String> backToOpt = OPTIONAL.narrow(kindOpt);
Think Of It As: A wrapper that allows Java's type system to work with type constructors generically.
Note on Either: Either has two witness types depending on usage:
- EitherKind.Witness<L> for Kind<EitherKind.Witness<L>, R> - used with Functor/Monad (right-biased)
- EitherKind2.Witness for Kind2<EitherKind2.Witness, L, R> - used with Bifunctor (both sides)
Related: Core Concepts
Type Constructor
Definition: A type that takes one or more type parameters to produce a concrete type. In other words, it's a "type function" that constructs types.
Examples:
// List is a type constructor
List // Not a complete type (needs a parameter)
List<T> // Type constructor applied to parameter T
List<String> // Concrete type
// Either is a type constructor with two parameters
Either // Not a complete type
Either<L, R> // Type constructor applied to parameters L and R
Either<String, Integer> // Concrete type
// Optional is a type constructor
Optional // Not a complete type
Optional<T> // Type constructor applied to parameter T
Optional<String> // Concrete type
Notation: Often written with an underscore to show the "hole": List<_>, Either<String, _>, Optional<_>
Why It Matters: Type constructors are what we abstract over with higher-kinded types. Understanding them is key to understanding HKTs.
Witness Type
Definition: A marker type used to represent a type constructor in the defunctionalisation pattern. Each type constructor has a corresponding witness type.
Examples:
// List type constructor → ListKind.Witness
public interface ListKind<A> extends Kind<ListKind.Witness, A> {
final class Witness { private Witness() {} }
}
// Optional type constructor → OptionalKind.Witness
public interface OptionalKind<A> extends Kind<OptionalKind.Witness, A> {
final class Witness { private Witness() {} }
}
// Either type constructor → EitherKind.Witness<L>
public interface EitherKind<L, R> extends Kind2<EitherKind.Witness<L>, L, R> {
final class Witness<L> { private Witness() {} }
}
Usage:
// The Witness type is used as the F parameter:
Functor<ListKind.Witness> listFunctor = ListFunctor.INSTANCE;
Functor<OptionalKind.Witness> optionalFunctor = OptionalFunctor.INSTANCE;
MonadError<EitherKind.Witness<String>, String> eitherMonad = EitherMonadError.instance();
Think Of It As: A compile-time tag that identifies which type constructor we're working with.
Related: Core Concepts
Phantom Type
Definition: A type parameter that appears in a type signature but has no corresponding runtime representation—it exists purely for compile-time type safety and doesn't store any actual data of that type.
Key Characteristics:
- Present in the type signature for type-level information
- Never instantiated or stored at runtime
- Used for type-safe APIs without runtime overhead
- Enables compile-time guarantees whilst maintaining efficiency
Example:
// Const<C, A> uses A as a phantom type
Const<String, Integer> stringConst = new Const<>("hello");
// The Integer type parameter is phantom - no Integer is stored!
String value = stringConst.value(); // "hello"
// Mapping over the phantom type changes the signature but not the value
Const<String, Double> doubleConst = stringConst.mapSecond(i -> i * 2.0);
System.out.println(doubleConst.value()); // Still "hello" (unchanged!)
Common Use Cases:
- State tracking at compile time: Phantom types in state machines (e.g., DatabaseConnection<Closed> vs DatabaseConnection<Open>)
- Units of measure: Tracking units without runtime overhead (e.g., Measurement<Metres> vs Measurement<Feet>)
- Const type: The second type parameter in Const<C, A> is phantom, enabling fold and getter patterns
- Type-safe builders: Ensuring build steps are called in the correct order
Real-World Example:
// State machine with phantom types
final class FileHandle<State> {
  private final File file;
  private FileHandle(File file) { this.file = file; }
  static FileHandle<Closed> of(File file) { return new FileHandle<>(file); }
  // A static method makes the transition available only for Closed handles
  static FileHandle<Open> open(FileHandle<Closed> handle) {
    return new FileHandle<>(handle.file);
  }
}
final class Open {}
final class Closed {}
// Type-safe at compile time:
FileHandle<Closed> closed = FileHandle.of(new File("data.txt"));
FileHandle<Open> opened = FileHandle.open(closed); // ✅ Allowed
// FileHandle.open(opened); // ❌ Compile error - already open!
Benefits:
- Zero runtime cost - no additional memory or processing
- Compile-time safety - prevents incorrect API usage
- Self-documenting APIs - type signature conveys intent
- Enables advanced patterns like GADTs (Generalised Algebraic Data Types)
Where You'll See It:
- Const<C, A> - the A parameter is phantom
- Witness types in HKT encoding (though serving a different purpose)
- State machines and protocol enforcement
- Type-level programming patterns
Related: Const Type Documentation, Witness Type
Functional Type Classes
Applicative
Definition: A type class that extends Functor with the ability to lift pure values into a context and combine multiple independent computations.
Core Operations:
- of(A value) - Lift a pure value into the context
- ap(Kind<F, Function<A,B>> ff, Kind<F, A> fa) - Apply a wrapped function to a wrapped value
- map2, map3, etc. - Combine multiple wrapped values
Example:
Applicative<OptionalKind.Witness> app = OptionalApplicative.INSTANCE;
// Lift pure values
Kind<OptionalKind.Witness, Integer> five = app.of(5); // Optional[5]
// Combine independent values
Kind<OptionalKind.Witness, String> result = app.map2(
app.of("Hello"),
app.of("World"),
(a, b) -> a + " " + b
); // Optional["Hello World"]
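The essence of map2 can be written directly against java.util.Optional: the combining function runs only when both independent values are present. This is a plain-Java sketch of what the Applicative instance does for the Optional context, not the library's own code.

```java
import java.util.Optional;
import java.util.function.BiFunction;

public class ApplicativeSketch {
    // Combine two independent Optional values; empty on either side wins
    static <A, B, C> Optional<C> map2(
            Optional<A> fa, Optional<B> fb, BiFunction<A, B, C> f) {
        return fa.flatMap(a -> fb.map(b -> f.apply(a, b)));
    }

    public static void main(String[] args) {
        System.out.println(map2(Optional.of("Hello"), Optional.of("World"),
                (a, b) -> a + " " + b)); // Optional[Hello World]
        System.out.println(map2(Optional.of("Hello"), Optional.<String>empty(),
                (a, b) -> a + " " + b)); // Optional.empty
    }
}
```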
When To Use: Combining multiple independent effects (form validation, parallel computations, configuration assembly).
Related: Applicative Documentation
Bifunctor
Definition: A type class for types with two covariant parameters, allowing transformation of both sides independently or simultaneously.
Core Operations:
- bimap(Function<A,C> f, Function<B,D> g, Kind2<F,A,B> fab) - Transform both parameters
- first(Function<A,C> f, Kind2<F,A,B> fab) - Transform only the first parameter
- second(Function<B,D> g, Kind2<F,A,B> fab) - Transform only the second parameter
Example:
Bifunctor<EitherKind.Witness> bifunctor = EitherBifunctor.INSTANCE;
Either<String, Integer> either = Either.right(42);
Kind2<EitherKind.Witness, String, Integer> kindEither = EITHER.widen(either);
// Transform both sides
Kind2<EitherKind.Witness, Integer, String> transformed =
bifunctor.bimap(String::length, Object::toString, kindEither);
// Right("42")
When To Use: Transforming error and success channels, working with pairs/tuples, API format conversion.
Related: Bifunctor Documentation
Functor
Definition: The most basic type class for types that can be "mapped over". Allows transforming values inside a context without changing the context structure.
Core Operation:
- map(Function<A,B> f, Kind<F,A> fa) - Apply a function to the wrapped value
Example:
Functor<ListKind.Witness> functor = ListFunctor.INSTANCE;
Kind<ListKind.Witness, String> strings = LIST.widen(List.of("one", "two"));
Kind<ListKind.Witness, Integer> lengths = functor.map(String::length, strings);
// [3, 3]
Laws:
- Identity: map(x -> x, fa) == fa
- Composition: map(g.compose(f), fa) == map(g, map(f, fa))
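Both laws can be checked directly with plain Java lists, using Stream.map as the map implementation:

```java
import java.util.List;
import java.util.function.Function;

public class FunctorLaws {
    // List's map: transform every element, keep the list structure
    static <A, B> List<B> map(Function<A, B> f, List<A> fa) {
        return fa.stream().map(f).toList();
    }

    public static void main(String[] args) {
        List<String> fa = List.of("one", "two", "three");
        Function<String, Integer> f = String::length;
        Function<Integer, Integer> g = n -> n * 10;

        // Identity: map(x -> x, fa) == fa
        System.out.println(map(Function.identity(), fa).equals(fa)); // true

        // Composition: map(g.compose(f), fa) == map(g, map(f, fa))
        System.out.println(
            map(g.compose(f), fa).equals(map(g, map(f, fa)))); // true
    }
}
```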
When To Use: Simple transformations where the context (container structure) stays the same.
Related: Functor Documentation
Monad
Definition: A type class that extends Applicative with the ability to chain dependent computations (flatMap/bind).
Core Operation:
- flatMap(Function<A, Kind<F,B>> f, Kind<F,A> ma) - Chain computations where each depends on the previous result
Additional Operations:
- flatMap2/3/4/5(...) - Combine multiple monadic values with a function that returns a monadic value (similar to map2/3/4/5 but with an effectful combining function)
- as(B value, Kind<F,A> ma) - Replace the result while preserving the effect
- peek(Consumer<A> action, Kind<F,A> ma) - Perform a side effect without changing the value
Example:
Monad<OptionalKind.Witness> monad = OptionalMonad.INSTANCE;
// Chain dependent operations
Kind<OptionalKind.Witness, String> result =
monad.flatMap(
userId -> monad.flatMap(
profile -> findAccount(profile.accountId()),
findProfile(userId)
),
findUser("user123")
);
// Combine multiple monadic values with effectful result
Kind<OptionalKind.Witness, Order> order =
monad.flatMap2(
findUser("user123"),
findProduct("prod456"),
(user, product) -> validateAndCreateOrder(user, product)
);
Laws:
- Left Identity: flatMap(f, of(a)) == f(a)
- Right Identity: flatMap(of, m) == m
- Associativity: flatMap(g, flatMap(f, m)) == flatMap(x -> flatMap(g, f(x)), m)
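The laws can be verified directly against java.util.Optional, where Optional.of plays the role of of and Optional.flatMap the role of flatMap:

```java
import java.util.Optional;
import java.util.function.Function;

public class MonadLaws {
    public static void main(String[] args) {
        Function<Integer, Optional<String>> f = n -> Optional.of("n=" + n);
        Function<String, Optional<Integer>> g = s -> Optional.of(s.length());
        Optional<Integer> m = Optional.of(42);

        // Left identity: of(a).flatMap(f) == f(a)
        System.out.println(Optional.of(42).flatMap(f).equals(f.apply(42))); // true

        // Right identity: m.flatMap(of) == m
        System.out.println(m.flatMap(Optional::of).equals(m)); // true

        // Associativity: (m.flatMap(f)).flatMap(g) == m.flatMap(x -> f(x).flatMap(g))
        Optional<Integer> lhs = m.flatMap(f).flatMap(g);
        Optional<Integer> rhs = m.flatMap(x -> f.apply(x).flatMap(g));
        System.out.println(lhs.equals(rhs)); // true
    }
}
```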
When To Use: Sequential operations where each step depends on the previous result (database queries, async workflows, error handling pipelines).
Related: Monad Documentation
Monoid
Definition: A type class for types that have an associative binary operation (combine) and an identity element (empty). Extends Semigroup by adding the identity element, making it safe for reducing empty collections.
Core Operations:
- empty() - The identity element
- combine(A a1, A a2) - Associative binary operation (from Semigroup)
- combineAll(Iterable<A> elements) - Combine all elements in a collection
- combineN(A value, int n) - Combine a value with itself n times
- isEmpty(A value) - Test whether a value equals the empty element
Example:
Monoid<Integer> intAddition = Monoids.integerAddition();
// Identity law: empty is the neutral element
intAddition.combine(5, intAddition.empty()); // 5
intAddition.combine(intAddition.empty(), 5); // 5
// Combine a collection
List<Integer> numbers = List.of(1, 2, 3, 4, 5);
Integer sum = intAddition.combineAll(numbers); // 15
// Repeated application
Integer result = intAddition.combineN(3, 4); // 12 (3+3+3+3)
// Working with Optional values
Monoid<Optional<Integer>> maxMonoid = Monoids.maximum();
Optional<Integer> max = maxMonoid.combineAll(
List.of(Optional.of(5), Optional.empty(), Optional.of(10))
); // Optional[10]
Common Instances in Monoids utility:
- integerAddition(), longAddition(), doubleAddition() - Numeric addition
- integerMultiplication(), longMultiplication(), doubleMultiplication() - Numeric multiplication
- string() - String concatenation
- list(), set() - Collection concatenation/union
- booleanAnd(), booleanOr() - Boolean operations
- firstOptional(), lastOptional() - First/last non-empty Optional
- maximum(), minimum() - Max/min value aggregation with Optional
Laws:
- Left Identity: combine(empty(), a) == a
- Right Identity: combine(a, empty()) == a
- Associativity: combine(a, combine(b, c)) == combine(combine(a, b), c) (from Semigroup)
When To Use: Aggregating data (summing values, concatenating strings), reducing collections, folding data structures, accumulating results in parallel computations.
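The core abstraction is small enough to sketch in plain Java. This is a minimal stand-in, not the library's actual Monoid interface, but it shows why the identity element makes combineAll safe on empty input:

```java
import java.util.List;

// Minimal sketch of the Monoid abstraction (illustrative, not the library's interface)
interface Monoid<A> {
    A empty();
    A combine(A a1, A a2);

    // combineAll is derivable: fold over the elements, starting from the identity
    default A combineAll(Iterable<A> elements) {
        A acc = empty();
        for (A a : elements) acc = combine(acc, a);
        return acc;
    }
}

public class MonoidSketch {
    public static void main(String[] args) {
        Monoid<Integer> intAddition = new Monoid<>() {
            public Integer empty() { return 0; }
            public Integer combine(Integer a, Integer b) { return a + b; }
        };
        System.out.println(intAddition.combineAll(List.of(1, 2, 3, 4, 5))); // 15
        System.out.println(intAddition.combineAll(List.of())); // 0 - safe on empty input
    }
}
```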
Related: Semigroup and Monoid Documentation
Semigroup
Definition: A type class for types that have an associative binary operation. The most fundamental algebraic structure for combining values.
Core Operation:
- combine(A a1, A a2) - Associative binary operation
Example:
Semigroup<String> stringConcat = Semigroups.string();
String result = stringConcat.combine("Hello", " World"); // "Hello World"
// With custom delimiter
Semigroup<String> csvConcat = Semigroups.string(", ");
String csv = csvConcat.combine("apple", "banana"); // "apple, banana"
// For error accumulation in Validated
Semigroup<String> errorAccumulator = Semigroups.string("; ");
Applicative<Validated.Witness<String>> validator =
ValidatedMonad.instance(errorAccumulator);
// Errors are combined: "Field A is invalid; Field B is required"
Common Instances in Semigroups utility:
- string() - Basic string concatenation
- string(String delimiter) - String concatenation with delimiter
- list() - List concatenation
- set() - Set union
- first() - Always takes the first value
- last() - Always takes the last value
Laws:
- Associativity: combine(a, combine(b, c)) == combine(combine(a, b), c)
When To Use: Error accumulation (especially with Validated), combining partial results, building aggregators where an empty/identity value doesn't make sense.
Related: Semigroup and Monoid Documentation
MonadError
Definition: A type class that extends Monad with explicit error handling capabilities for a specific error type.
Core Operations:
- raiseError(E error) - Create an error state
- handleErrorWith(Kind<F,A> ma, Function<E, Kind<F,A>> handler) - Recover from errors
Example:
MonadError<EitherKind.Witness<String>, String> monadError = EitherMonadError.instance();
Kind<EitherKind.Witness<String>, Double> result =
monadError.handleErrorWith(
divideOperation,
error -> monadError.of(0.0) // Provide default on error
);
When To Use: Workflows that need explicit error handling and recovery (validation, I/O operations, API calls).
Related: MonadError Documentation
Profunctor
Definition: A type class for types that are contravariant in their first parameter (input) and covariant in their second parameter (output). The canonical example is Function<A, B>.
Core Operations:
- lmap(Function<C,A> f, Kind2<P,A,B> pab) - Pre-process the input (contravariant)
- rmap(Function<B,D> g, Kind2<P,A,B> pab) - Post-process the output (covariant)
- dimap(Function<C,A> f, Function<B,D> g, Kind2<P,A,B> pab) - Transform both simultaneously
Example:
Profunctor<FunctionKind.Witness> prof = FunctionProfunctor.INSTANCE;
Function<String, Integer> stringLength = String::length;
Kind2<FunctionKind.Witness, String, Integer> kindFunc = FUNCTION.widen(stringLength);
// Adapt to work with integers (converting to string first)
Kind2<FunctionKind.Witness, Integer, Integer> intLength =
prof.lmap(Object::toString, kindFunc);
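Since Function<A, B> is the canonical profunctor, dimap can be written in one line of plain Java: pre-compose on the input side, post-compose on the output side. This is a standalone sketch of the idea, without the library's Kind2 wrappers:

```java
import java.util.function.Function;

public class ProfunctorSketch {
    // dimap for plain functions: contravariant on input, covariant on output
    static <A, B, C, D> Function<C, D> dimap(
            Function<C, A> pre, Function<B, D> post, Function<A, B> f) {
        return pre.andThen(f).andThen(post);
    }

    public static void main(String[] args) {
        Function<String, Integer> stringLength = String::length;
        // Accept any Object on the way in, format the count on the way out
        Function<Object, String> adapted =
                dimap(Object::toString, n -> n + " chars", stringLength);
        System.out.println(adapted.apply(12345)); // "5 chars"
    }
}
```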
When To Use: Building adaptable pipelines, API adapters, validation frameworks that need to work with different input/output formats.
Related: Profunctor Documentation
Selective
Definition: A type class that sits between Applicative and Monad, providing conditional effects with static structure. All branches must be known upfront, enabling static analysis.
Core Operations:
- select(Kind<F, Choice<A,B>> fab, Kind<F, Function<A,B>> ff) - Conditionally apply a function
- whenS(Kind<F, Boolean> cond, Kind<F, Unit> effect) - Execute the effect only if the condition is true
- ifS(Kind<F, Boolean> cond, Kind<F, A> ifTrue, Kind<F, A> ifFalse) - If-then-else with both branches visible
Example:
Selective<IOKind.Witness> selective = IOSelective.INSTANCE;
// Only log if debug is enabled
Kind<IOKind.Witness, Boolean> debugEnabled =
IO_KIND.widen(IO.delay(() -> config.isDebug()));
Kind<IOKind.Witness, Unit> logEffect =
IO_KIND.widen(IO.fromRunnable(() -> log.debug("Debug info")));
Kind<IOKind.Witness, Unit> conditionalLog = selective.whenS(debugEnabled, logEffect);
When To Use: Feature flags, conditional logging, configuration-based behaviour, multi-source fallback strategies.
Related: Selective Documentation
Data Types and Structures
Choice
Definition: A type representing a choice between two alternatives, similar to Either but used specifically in the context of Selective functors. Can be Left<A> (needs processing) or Right<B> (already processed).
Example:
// Helper methods in Selective interface
Choice<String, Integer> needsParsing = Selective.left("42");
Choice<String, Integer> alreadyParsed = Selective.right(42);
// In selective operations
Kind<F, Choice<String, Integer>> input = ...;
Kind<F, Function<String, Integer>> parser = ...;
Kind<F, Integer> result = selective.select(input, parser);
// Parser only applied if Choice is Left
Related: Selective Documentation
Unit
Definition: A type with exactly one value (Unit.INSTANCE), representing the completion of an operation that doesn't produce a meaningful result. The functional equivalent of void, but usable as a type parameter.
Example:
// IO action that performs a side effect
Kind<IOKind.Witness, Unit> printAction =
IO_KIND.widen(IO.fromRunnable(() -> System.out.println("Hello")));
// Optional as MonadError<..., Unit>
MonadError<OptionalKind.Witness, Unit> optionalMonad = OptionalMonad.INSTANCE;
Kind<OptionalKind.Witness, String> empty =
optionalMonad.raiseError(Unit.INSTANCE); // Creates Optional.empty()
When To Use:
- Effects that don't return a value (logging, printing, etc.)
- Error types for contexts where absence is the only error (Optional, Maybe)
Related: Core Concepts
Const
Definition: A constant functor that wraps a value of type C whilst ignoring a phantom type parameter A. The second type parameter exists purely for type-level information and has no runtime representation.
Structure: Const<C, A> where C is the concrete value type and A is phantom.
Example:
// Store a String, phantom type is Integer
Const<String, Integer> stringConst = new Const<>("hello");
String value = stringConst.value(); // "hello"
// Mapping over the phantom type changes the signature but not the value
Const<String, Double> doubleConst = stringConst.mapSecond(i -> i * 2.0);
System.out.println(doubleConst.value()); // Still "hello" (unchanged!)
// Bifunctor allows transforming the actual value
Bifunctor<ConstKind2.Witness> bifunctor = ConstBifunctor.INSTANCE;
Const<Integer, Double> intConst = CONST.narrow2(bifunctor.bimap(
String::length,
i -> i * 2.0,
CONST.widen2(stringConst)
));
System.out.println(intConst.value()); // 5
When To Use:
- Implementing van Laarhoven lenses and folds
- Accumulating values whilst traversing structures
- Teaching phantom types and their practical applications
- Building optics that extract rather than modify data
Related: Phantom Type, Bifunctor, Const Type Documentation
Optics Terminology
At
Definition: A type class for structures that support indexed access with insertion and deletion semantics. Provides a Lens<S, Optional<A>> where setting to Optional.empty() deletes the entry and setting to Optional.of(value) inserts or updates it.
Core Operations:
- at(I index) - Returns Lens<S, Optional<A>> for the index
- get(I index, S source) - Read the value at the index (returns Optional)
- insertOrUpdate(I index, A value, S source) - Insert or update an entry
- remove(I index, S source) - Delete the entry at the index
- modify(I index, Function<A,A> f, S source) - Update the value if present
Example:
At<Map<String, Integer>, String, Integer> mapAt = AtInstances.mapAt();
Map<String, Integer> scores = new HashMap<>(Map.of("alice", 100));
// Insert new entry
Map<String, Integer> withBob = mapAt.insertOrUpdate("bob", 85, scores);
// Result: {alice=100, bob=85}
// Remove entry
Map<String, Integer> noAlice = mapAt.remove("alice", withBob);
// Result: {bob=85}
// Compose with Lens for deep access
Lens<UserProfile, Optional<String>> themeLens =
settingsLens.andThen(mapAt.at("theme"));
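The core semantics - an empty Optional deletes the entry, a present one inserts or updates it - can be written directly for Map. This is a plain-Java sketch of the behaviour, not the library's implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class AtSketch {
    // Set the value at a key: Optional.of inserts/updates, Optional.empty deletes
    static <K, V> Map<K, V> set(K key, Optional<V> value, Map<K, V> source) {
        Map<K, V> copy = new HashMap<>(source);
        if (value.isPresent()) {
            copy.put(key, value.get());
        } else {
            copy.remove(key);
        }
        return Map.copyOf(copy); // return an immutable snapshot
    }

    public static void main(String[] args) {
        Map<String, Integer> scores = Map.of("alice", 100);
        Map<String, Integer> withBob = set("bob", Optional.of(85), scores);
        Map<String, Integer> noAlice = set("alice", Optional.empty(), withBob);
        System.out.println(withBob.size() + " " + noAlice.containsKey("alice")); // 2 false
    }
}
```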
When To Use: CRUD operations on maps or lists where you need to insert new entries or delete existing ones whilst maintaining immutability and optics composability.
Related: At Type Class Documentation
Iso (Isomorphism)
Definition: An optic representing a lossless, bidirectional conversion between two types. If you can convert A to B and back to A without losing information, you have an isomorphism.
Core Operations:
- get(S source) - Convert from S to A
- reverseGet(A value) - Convert from A to S
Example:
// String and List<Character> are isomorphic
Iso<String, List<Character>> stringToChars = Iso.iso(
s -> s.chars().mapToObj(c -> (char) c).collect(Collectors.toList()),
chars -> chars.stream().map(String::valueOf).collect(Collectors.joining())
);
List<Character> chars = stringToChars.get("Hello"); // ['H', 'e', 'l', 'l', 'o']
String back = stringToChars.reverseGet(chars); // "Hello"
When To Use: Converting between equivalent representations (e.g., Celsius/Fahrenheit, String/ByteArray, domain models and DTOs with no information loss).
Related: Iso Documentation
Lens
Definition: An optic for working with product types (records with fields). Provides a composable way to get and set fields in immutable data structures.
Core Operations:
- get(S source) - Extract a field value
- set(A newValue, S source) - Create a new copy with the updated field
- modify(Function<A,A> f, S source) - Update the field using a function
Example:
@GenerateLenses
public record Address(String street, String city) {}
@GenerateLenses
public record Company(String name, Address address) {}
@GenerateLenses
public record Employee(String name, Company company) {}
// Compose lenses for deep updates
Lens<Employee, String> employeeToStreet =
EmployeeLenses.company()
.andThen(CompanyLenses.address())
.andThen(AddressLenses.street());
// Update nested field in one line
Employee updated = employeeToStreet.set("456 New St", originalEmployee);
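To see what is going on under the hood, a lens can be hand-written as a getter/setter pair. This is roughly the shape of what @GenerateLenses produces, not the generated code itself:

```java
import java.util.function.BiFunction;
import java.util.function.Function;

// A hand-written lens: a composable getter/setter pair over immutable data
record Lens<S, A>(Function<S, A> getter, BiFunction<S, A, S> setter) {
    A get(S s) { return getter.apply(s); }
    S set(A a, S s) { return setter.apply(s, a); }
    S modify(Function<A, A> f, S s) { return set(f.apply(get(s)), s); }
    <B> Lens<S, B> andThen(Lens<A, B> other) {
        return new Lens<>(
            s -> other.get(get(s)),
            (s, b) -> set(other.set(b, get(s)), s));
    }
}

record Address(String street, String city) {}
record Company(String name, Address address) {}

public class LensSketch {
    public static void main(String[] args) {
        Lens<Company, Address> address =
            new Lens<>(Company::address, (c, a) -> new Company(c.name(), a));
        Lens<Address, String> street =
            new Lens<>(Address::street, (a, s) -> new Address(s, a.city()));

        Company acme = new Company("Acme", new Address("1 Old Rd", "London"));
        Company moved = address.andThen(street).set("456 New St", acme);
        System.out.println(moved.address().street()); // 456 New St
        System.out.println(acme.address().street());  // 1 Old Rd (original untouched)
    }
}
```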
Related: Lenses Documentation
Prism
Definition: An optic for working with sum types (sealed interfaces, Optional, Either). Provides safe access to specific variants within a discriminated union.
Core Operations:
- preview(S source) - Try to extract the variant (returns Optional)
- review(A value) - Construct the sum type from the variant
- modify(Function<A,A> f, S source) - Update if the variant matches
Example:
@GeneratePrisms
public sealed interface PaymentMethod {
record CreditCard(String number) implements PaymentMethod {}
record BankTransfer(String iban) implements PaymentMethod {}
}
Prism<PaymentMethod, String> creditCardPrism =
PaymentMethodPrisms.creditCard().andThen(CreditCardLenses.number());
// Safe extraction
Optional<String> cardNumber = creditCardPrism.preview(payment);
// Conditional update
PaymentMethod masked = creditCardPrism.modify(num -> "****" + num.substring(12), payment);
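A prism can also be hand-written as a preview/review pair over a sealed hierarchy. This is a sketch of the shape @GeneratePrisms automates (the type names here are standalone, not the example above):

```java
import java.util.Optional;
import java.util.function.Function;

sealed interface Payment permits Card, Transfer {}
record Card(String number) implements Payment {}
record Transfer(String iban) implements Payment {}

// A hand-written prism: partial getter (preview) plus constructor (review)
record Prism<S, A>(Function<S, Optional<A>> previewFn, Function<A, S> reviewFn) {
    Optional<A> preview(S s) { return previewFn.apply(s); }
    S review(A a) { return reviewFn.apply(a); }
    // Update only when the variant matches; otherwise return the source unchanged
    S modify(Function<A, A> f, S s) {
        return preview(s).map(a -> review(f.apply(a))).orElse(s);
    }
}

public class PrismSketch {
    public static void main(String[] args) {
        Prism<Payment, String> cardNumber = new Prism<>(
            p -> p instanceof Card c ? Optional.of(c.number()) : Optional.empty(),
            Card::new);

        Payment card = new Card("1234567890123456");
        Payment masked = cardNumber.modify(n -> "****" + n.substring(12), card);
        System.out.println(cardNumber.preview(masked).orElseThrow()); // ****3456

        Payment bank = new Transfer("GB00BANK1234");
        System.out.println(cardNumber.modify(n -> "?", bank) == bank); // true - no match, untouched
    }
}
```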
Related: Prisms Documentation
Traversal
Definition: An optic for working with multiple values within a structure (lists, sets, trees). Allows bulk operations on all elements.
Core Operations:
- modifyF(Applicative<F> app, Function<A, Kind<F,A>> f, S source) - Effectful modification of all elements
- toList(S source) - Extract all focused values as a list
Example:
@GenerateLenses
public record Order(String id, List<LineItem> items) {}
Traversal<Order, LineItem> orderItems =
OrderLenses.items().asTraversal();
// Apply bulk update
Order discounted = orderItems.modify(
item -> item.withPrice(item.price() * 0.9),
order
);
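The bulk-update behaviour can be written directly for the list case: apply a function to every focused element and rebuild the outer structure immutably. A plain-Java sketch of what the traversal does, not the library's implementation:

```java
import java.util.List;
import java.util.function.Function;

record LineItem(String sku, double price) {
    LineItem withPrice(double p) { return new LineItem(sku, p); }
}
record Order(String id, List<LineItem> items) {}

public class TraversalSketch {
    // Modify every line item, producing a new Order; the original is untouched
    static Order modifyItems(Function<LineItem, LineItem> f, Order order) {
        return new Order(order.id(), order.items().stream().map(f).toList());
    }

    public static void main(String[] args) {
        Order order = new Order("o1",
            List.of(new LineItem("a", 100.0), new LineItem("b", 50.0)));
        Order discounted = modifyItems(i -> i.withPrice(i.price() * 0.9), order);
        System.out.println(discounted.items().get(0).price()); // 90.0
        System.out.println(order.items().get(0).price());      // 100.0 (unchanged)
    }
}
```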
Related: Traversals Documentation
Contributing to Java HKT Simulation
First off, thank you for considering contributing! This project is a simulation to explore Higher-Kinded Types in Java, and contributions are welcome.
This document provides guidelines for contributing to this project.
Code of Conduct
This project and everyone participating in it is governed by the Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to simulation.hkt@gmail.com.
How Can I Contribute?
Reporting Bugs
- Ensure the bug was not already reported by searching on GitHub under Issues.
- If you're unable to find an open issue addressing the problem, open a new one. Be sure to include a title and clear description, as much relevant information as possible, and a code sample or an executable test case demonstrating the expected behavior that is not occurring.
- Use the "Bug Report" issue template if available.
Suggesting Enhancements
- Open a new issue to discuss your enhancement suggestion. Please provide details about the motivation and potential implementation.
- Use the "Feature Request" issue template if available.
Your First Code Contribution
Unsure where to begin contributing? You can start by looking through good first issue or help wanted issues (you can add these labels yourself to issues you think fit).
Pull Requests
- Fork the repository on GitHub.
- Clone your fork locally: git clone git@github.com:higher-kinded-j/higher-kinded-j.git
- Create a new branch for your changes: git checkout -b name-of-your-feature-or-fix
- Make your changes. Ensure you adhere to standard Java coding conventions.
- Add tests for your changes. This is important!
- Run the tests: make sure the full test suite passes using ./gradlew test.
- Build the project: ensure the project builds without errors using ./gradlew build.
- Commit your changes with a clear and descriptive message: git commit -am 'Add some feature'
- Push to your fork: git push origin name-of-your-feature-or-fix
- Open a Pull Request against the main branch of the original repository.
- Describe your changes in the Pull Request description and link any relevant issues (e.g., "Closes #123").
- Ensure the GitHub Actions CI checks pass.
Development Setup
- You need a Java Development Kit (JDK), version 24 or later.
- This project uses Gradle. You can use the included Gradle Wrapper (gradlew) to build and test:
  - Build the project: ./gradlew build
  - Run tests: ./gradlew test
  - Generate JaCoCo coverage reports: ./gradlew test jacocoTestReport (HTML report at build/reports/jacoco/test/html/index.html)
Coding Style
Please follow the Google Java Style Guide. Keep code simple, readable, and well-tested. Consistent formatting is encouraged.
Thank you for contributing!
Contributor Covenant Code of Conduct
Our Pledge
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
Our Standards
Examples of behavior that contributes to a positive environment for our community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting
Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.
Scope
This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at simulation.hkt@gmail.com. All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the reporter of any incident.
Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
1. Correction
Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.
Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
2. Warning
Community Impact: A violation through a single incident or series of actions.
Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
3. Temporary Ban
Community Impact: A serious violation of community standards, including sustained inappropriate behavior.
Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
4. Permanent Ban
Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.
Consequence: A permanent ban from any sort of public interaction within the community.
Attribution
This Code of Conduct is adapted from the Contributor Covenant, version 2.1, available at https://www.contributor-covenant.org/version/2/1/code_of_conduct.html.
Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder.
For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
MIT License
Copyright (c) 2025 Magnus Smith
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.