Complexity, still the primary cause of failure and delay of IT projects, arises from the interaction of concerns.
The fundamental concerns in IT systems are code, form, data, and time — their subtle interactions and couplings combine to have a suffocating impact on software projects.
Our approach to building IT systems is defined by a unique methodology and a set of supporting tools that allow us to keep these four concerns separate, resulting in:
Rapid development of large IT systems
Increased levels of agility and software re-use
Lower maintenance costs
Secure, reliable systems that are robust yet flexible
We believe that systems should be built on the conceptual bedrock of a dual timeline. This provides a universal set of essential capabilities for organizations that operate within disparate and ever-evolving landscapes of information, and that require an accurate and consistent view of information at any point: past, present and future.
The main timeline is fully controllable, allowing for:
The integration of imported records
The previewable orchestration of future states
The secondary timeline is immutable and provides a guaranteed, built-in audit trail.
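The dual timeline described above is the essence of bitemporality: every fact carries both a valid time (when it was true in the business domain) and a transaction time (when the system recorded it). A minimal sketch in Python, purely for illustration (this is not XTDB's API; the entities, dates and helper function are hypothetical):

```python
from datetime import date

# Each fact carries two timestamps: valid time (when it was true in
# the domain) and transaction time (when the system recorded it).
facts = [
    # (entity, attribute, value, valid_from, recorded_at)
    ("acct-1", "status", "open",   date(2023, 1, 1), date(2023, 1, 1)),
    ("acct-1", "status", "closed", date(2023, 6, 1), date(2023, 6, 3)),
]

def as_of(facts, entity, attribute, valid_time, tx_time):
    """Return the value valid at `valid_time`, as known at `tx_time`."""
    candidates = [
        f for f in facts
        if f[0] == entity and f[1] == attribute
        and f[3] <= valid_time and f[4] <= tx_time
    ]
    # Among facts recorded by tx_time, the latest valid-time wins.
    return max(candidates, key=lambda f: f[3])[2] if candidates else None

# On 2023-06-02 the closure had not yet been recorded:
print(as_of(facts, "acct-1", "status", date(2023, 6, 2), date(2023, 6, 2)))  # open
# Once the late-arriving record lands, the same valid time shows the
# corrected history, while the original answer remains reproducible:
print(as_of(facts, "acct-1", "status", date(2023, 6, 2), date(2023, 6, 4)))  # closed
```

Separating the two timelines is what allows late-arriving records to be integrated without erasing what the system previously believed.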
Data is fundamental to the creation of software. It will routinely outlive the interpretations we derive from it and the applications we build on top of it. Therefore, we will always seek to lay the foundations of data management independently of a particular application or use-case, and above a common timeline.
Table-based structures, found in spreadsheets and traditional SQL databases, are helpful for displaying and analyzing data, but they are not an appropriate format for storing data.
Tables are insufficient for representing data in its purest form, and this is particularly obvious when modeling sparse data or multi-valued attributes, or coping with the evolution of data over time.
Instead, we believe that data is best recorded as units of facts and relationships, without constraints.
More simply put, we believe graph-oriented systems point us towards the correct approach for data management, and organizations such as Facebook and Google have proved how powerful the widespread use of graph-structured data can be.
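Recording data as units of facts, rather than fixed-column rows, can be sketched with entity-attribute-value triples. The sketch below is illustrative only (the entities and attributes are hypothetical), and shows why sparse and multi-valued data fit naturally:

```python
# The same information as rows of (entity, attribute, value) triples
# rather than a fixed-column table. An absent attribute simply has no
# triple, and a multi-valued attribute gets one triple per value.
triples = [
    ("person-1", "name",  "Ada"),
    ("person-1", "email", "ada@example.com"),
    ("person-1", "email", "ada@work.example.com"),  # multi-valued
    ("person-2", "name",  "Alan"),                  # no email: no empty cell
]

def values(triples, entity, attribute):
    """Collect every value of an attribute for an entity."""
    return [v for e, a, v in triples if e == entity and a == attribute]

print(values(triples, "person-1", "email"))
# → ['ada@example.com', 'ada@work.example.com']
```

A table would need either a nulls-heavy column or a separate join table for the second email address; as facts, the shape of the data never has to be decided up front.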
Organizations can unlock different amounts of value from data depending on how they interpret it, how they shape it and the constraints they place on it.
Complex types & tables, schemas & structures, classes & containers, models & metadata, formats & encodings, representations.
These are the forms we construct for interpreting, analyzing and extracting value from our data; relational schemas and ontologies are among the most common.
Too often, software developers will attempt to "codify" these structures into their software as types and objects.
By understanding that schema is just another form of data, on a timeline, we let our information systems grow and flex with the businesses they serve. This is how we are able to build maintainable models of the real world.
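The idea of schema-as-data can be sketched as follows. This is a hypothetical illustration in Python, not a JUXT tool: validation reads the schema at runtime, so when the business adds a field, the model changes as data, not as redeployed code.

```python
# Schema held as plain data rather than hard-coded classes.
schema = {
    "person": {"name": str, "age": int},
}

def valid(schema, entity_type, record):
    """Check a record against whatever the schema says *right now*."""
    spec = schema[entity_type]
    return all(isinstance(record.get(k), t) for k, t in spec.items())

print(valid(schema, "person", {"name": "Ada", "age": 36}))  # True

# The business later requires an email address. Because schema is
# data, this is an update to a value, not a change to the codebase:
schema["person"]["email"] = str
print(valid(schema, "person", {"name": "Ada", "age": 36}))  # now False
```

Put such schema values on a timeline, as the text suggests, and you can also validate historical records against the schema that was in force when they were written.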
The software patterns and practices that underlie your application architecture are certainly important, but less fundamentally relevant than making the correct choices with regard to the structure of your data in time. Assuming those other foundations are in place, we believe that the best approach to code is to keep it short, readable and expressive.
In reality this means we prefer:
Writing modular & generic libraries of functions that operate over many types of data structures
Minimizing the use of state wherever possible
Maintaining declarative structures that avoid the need for reasoning about imperative control flows
Once state and domain data are pulled out, the remaining code is simple. Developers can focus on writing code to implement business logic, pure functional calculations and derivations, with resulting actions that automate the execution of business processes.
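The preferences listed above can be sketched in a few lines. This is an illustrative example (the order data and discount logic are hypothetical), showing pure calculations over plain data with no mutable state:

```python
def apply_discount(order, rate):
    """Pure calculation: returns a new order, never mutates its input."""
    return {**order, "total": round(order["total"] * (1 - rate), 2)}

orders = [{"id": 1, "total": 100.0}, {"id": 2, "total": 40.0}]

# A declarative pipeline: no loops, no mutable accumulators, and the
# same functions work over any sequence of order-shaped maps.
discounted = [apply_discount(o, 0.10) for o in orders]
grand_total = sum(o["total"] for o in discounted)

print(grand_total)  # 126.0 — and `orders` itself is unchanged
```

Because nothing here reads or writes shared state, each step can be tested and reasoned about in isolation.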
As software complexity is tamed, productivity soars.
We select and develop particular tools and technologies that compose harmoniously, chosen to support our methodology.
XTDB is our unbundled bitemporal graph database:
Query with Datalog, EQL, or SQL
Pluggable backends include PostgreSQL, Kafka, RocksDB and many more
XTDB provides the data timeline as a value, storing both 'valid' time and 'transaction' time. Queries that are often impossible in other databases are made simple in XTDB.
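What "the timeline as a value" buys you can be sketched in Python. This is not XTDB's API, just an illustration of the idea: a snapshot of the database at a chosen transaction time is an immutable value, so queries against it are repeatable even as new writes arrive.

```python
# A growing log of recorded facts: (tx_time, entity, attribute, value).
log = [
    (1, "order-1", "status", "placed"),
    (2, "order-1", "status", "shipped"),
]

def db_as_of(log, tx_time):
    """An immutable snapshot of everything known at tx_time."""
    return tuple(f for f in log if f[0] <= tx_time)

snapshot = db_as_of(log, 1)

log.append((3, "order-1", "status", "delivered"))  # new writes arrive...

# ...but the snapshot is a value: it answers exactly as it did before.
print([f for f in snapshot if f[2] == "status"])
# → [(1, 'order-1', 'status', 'placed')]
```

Holding the database as a value at a point in time is what makes audit and "what did we know then?" queries straightforward rather than impossible.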
Many object-oriented programming languages coax the developer into coupling code with data.
Functional programming languages pull them apart. Our go-to functional programming language — Clojure — goes further, providing language features to protect simplicity.
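The contrast between coupling and separation can be shown in a small, hypothetical example (sketched in Python for illustration rather than Clojure):

```python
# Coupled: the calculation is locked inside one class, so reusing it
# elsewhere means dragging the class along.
class Invoice:
    def __init__(self, lines):
        self.lines = lines

    def total(self):
        return sum(l["amount"] for l in self.lines)

# Decoupled: plain data plus a generic function, reusable for any
# sequence of line items — invoices, orders and refunds alike.
def total(lines):
    return sum(l["amount"] for l in lines)

print(total([{"amount": 5}, {"amount": 7}]))  # 12
```

Pulling functions away from the data they operate on is what lets a small library of generic functions serve many parts of a system.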
JUXT's leadership is made up of experienced software consultants who have led Agile projects, run Agile training courses and brought Agile methodologies into large organizations.
We have taken the best parts of Agile methodologies and combined them into a process that scales and delivers. We deploy often, provide regular showcases and encourage daily stand-ups. We use best-of-breed tools to provide transparency and to manage project work-streams.
We’ve been building systems on AWS since early 2014; over that time we’ve accumulated first-hand knowledge and expertise in managing and automating AWS infrastructure.
Our AWS systems have been successfully audited for PCI DSS compliance and won praise for their operational stability, information security and regulatory compliance. We have experience in meeting GDPR requirements and protecting data privacy.