
      Prosodical Thoughts: Prosody 13.0.0 released!

      news.movim.eu / PlanetJabber • 17 March • 7 minutes

    Welcome to a new major release of the Prosody XMPP server! While the 0.12 branch has served us well for a while now, this release brings a bunch of new features we’ve been busy polishing.

    If you’re unfamiliar with Prosody, it’s an open-source project that implements XMPP, an open standard protocol for online communication. Prosody is widely used to power everything from small self-hosted messaging servers to worldwide real-time applications such as Jitsi Meet. It’s part of a large ecosystem of compatible software that you can use for real-time online communication.

    Before we begin…

    The first thing anyone who has been following the project for a while will notice about this release is the version number.

    Long adherents of the cult of 0ver, we finally decided it was time to break away. As Shakespeare wrote, “That which we call a rose, by any other name would smell as sweet”, and the same is true of version numbers. Prosody has been stable and used in production deployments for many years; however, the ‘0.x’ version number occasionally misled people to believe otherwise. Apart from shifting the middle component leftwards, nothing has changed.

    If you’re really curious, you can read full details in our versioning and support policy.

    Our version numbers have also been in step with Debian’s for several versions now. Could this become a thing? Maybe!

    Overview of changes

    This release brings a wide range of improvements that make Prosody more secure, performant, and easier to manage than ever before. Let’s review the most significant changes that administrators and users can look forward to across a range of different topics.

    Security and authentication

    Security takes centre stage in this release with several notable improvements. Building on DNSSEC, the addition of full DANE support for server-to-server connections strengthens the trust between federating XMPP servers.

    We’ve enhanced our support for channel binding, which is now compatible with TLS 1.3, and we’ve added support for XEP-0440 which helps clients know which channel binding methods the server supports. Channel binding protects your connection from certain machine-in-the-middle attacks, even if the server’s TLS certificate is compromised.

    Account management

    Administrators now have more granular control over user accounts with the ability to disable and enable them as needed. This can be particularly useful for public servers, where disabling an account can act as a reversible alternative to deletion.

    In fact, there is now the ability to set a grace period for deleted accounts, so that an accidentally deleted account can be restored within that period.

    Roles and permissions

    A new role and permissions framework provides more flexible access control. Prosody supplies several built-in roles:

    • prosody:operator - for operators of the whole Prosody instance. By default, accounts with this role have full access, including to operations that affect the whole server.
    • prosody:admin - the usual role for admins of a specific virtual host (or component). Accounts with this role have permission to manage user accounts and various other aspects of the domain.
    • prosody:member - this role is for “normal” user accounts, but specifically those ones which are trusted to some extent by the administrators. Typically accounts that are created through an invitation or through manual provisioning by the admin have this role.
    • prosody:registered - this role is also for general user accounts, but is used by default for accounts which registered themselves, e.g. if the server has in-band registration enabled.
    • prosody:guest - finally, the “guest” role is used for temporary/anonymous accounts and is also the default for remote JIDs interacting with the server.

    For more details about how to use these roles, customize permissions, and more, read our new roles and permissions documentation. You will also find a link there to the development documentation, so module developers can make use of the new system.
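
    As a quick illustration, the long-standing admins option in the config file remains the simplest way to grant accounts the admin role (the account name below is a placeholder; the documentation above covers the finer-grained ways to assign the other roles):

    -- Accounts listed here are granted the admin role
    admins = { "alice@example.com" }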

    Shell commands

    Since the earliest releases, the prosodyctl command has been the admin’s primary way of managing and interacting with Prosody. In 0.12 we introduced the prosodyctl shell interface to send administrative commands to Prosody at runtime via a local connection. It has been a big success, and this release significantly extends its capabilities.

    • prosodyctl adduser/passwd/deluser commands now use the admin connection to create users, which improves compatibility with various storage and authentication plugins. It also ensures Prosody can instantly respond to changes, such as immediately disconnecting users when their account is deleted.
    • Pubsub management commands have been added, to create/configure/delete nodes and items on pubsub and PEP services without needing an XMPP client.
    • One of our own favourites as Prosody developers is the new prosodyctl shell watch log command, which lets you stream debug logs in real-time without needing to store them on the filesystem.
    • Similarly, there is now prosodyctl shell watch stanzas which lets you monitor stanzas to/from arbitrary JIDs, which is incredibly helpful for developers trying to diagnose various client issues.
    • Server-wide announcements can now be sent via the shell, optionally limiting the recipients by online status or role.
    • The MUC component has gained a few new commands for interacting with chat rooms.
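
    To give a flavour of the workflow, here is an illustrative session using two of the commands mentioned above (the account name is a placeholder):

    # Create an account through the running server's admin connection
    prosodyctl adduser alice@example.com

    # Stream debug logs in real time, without writing them to disk
    prosodyctl shell watch log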

    Improved Multi-User Chat (MUC) Management

    The MUC system has received a significant overhaul focusing on security and administrative control. By default, room creation is now restricted to local users, providing better control over who can create persistent and public rooms.

    Server administrators get new shell commands to inspect room occupants and affiliations, making day-to-day operations more straightforward.

    One notable change is that component admins are no longer automatically owners. This can be reverted to the old behaviour with component_admins_as_room_owners = true in the config, but this has known incompatibilities with some clients. Instead, admins can use the shell or ad-hoc commands to gain ownership of rooms when it’s necessary.
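
    As a rough sketch, a MUC component that restores the old ownership behaviour might be configured like this (the component address is a placeholder, and restrict_room_creation is shown as the usual option for limiting room creation; double-check option names against the current MUC documentation):

    Component "conference.example.com" "muc"
        -- Restore the pre-13.0 behaviour (has known client incompatibilities)
        component_admins_as_room_owners = true
        -- Limit room creation to users on this server (now the default)
        restrict_room_creation = "local"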

    Better Network Performance

    Network connectivity sees substantial improvements with the implementation of RFC 8305’s “Happy Eyeballs” algorithm, which enhances IPv4/IPv6 dual-stack performance and increases the chance of a successful connection.

    Support for TCP Fast Open and deferred accept capabilities (in the server_epoll backend) can potentially reduce connection latency.

    The server now also better handles SRV record selection by respecting the ‘weight’ parameter, leading to more efficient connection distribution.

    Storage and Performance Improvements

    Under the hood, Prosody now offers better query performance with its internal archive stores by generating indexes.

    SQLite users now have the option to use LuaSQLite3 instead of LuaDBI, potentially offering better performance and easier deployment.

    We’ve also added compatibility with SQLCipher, a fork of SQLite that adds support for encrypted databases.
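
    For reference, a minimal SQL storage configuration for SQLite looks roughly like this (the database path is a placeholder; selecting LuaSQLite3 over LuaDBI is described in the storage documentation and not shown here):

    storage = "sql"
    sql = {
        driver = "SQLite3",
        database = "/var/lib/prosody/prosody.sqlite",
    }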

    Configuration Improvements

    The configuration system has been modernized to support referencing and appending to previously set options, making complex configurations more manageable.

    While direct Lua API usage in the config file is now deprecated, it remains accessible through the new Lua.* namespace for those who need it.

    Also new in this release is the ability to reference credentials or other secrets in the configuration file, without storing them in the file itself. It is compatible with the credentials mechanisms supported by systemd, podman and more.

    Developer/API changes

    The development experience has always been an important part of our project - we set out to make an XMPP server that was very easy to extend and customize. Our developer API has improved with every release. We’ve even had first-time coders write Prosody plugins!

    There are too many improvements to list here, but some notable ones:

    • Storage access from modules has been simplified with a new ‘keyval+’ store type, which combines the old ‘keyval’ (default) and ‘map’ stores into a single interface. Before this change, many modules had to open the store twice to utilize the two APIs.
    • Any module can now replace custom permission handling with Prosody’s own permission framework via the simple module:may() API call.
    • Providing new commands for prosodyctl shell is now much easier for module developers.

    Backwards compatibility is of course generally preserved, although is_admin() has been deprecated in favour of module:may(). Modules that want to remain compatible with older versions can use mod_compat_roles to enable (limited) permission functionality.

    Important Notes for Upgrading

    A few breaking changes are worth noting:

    • Lua 5.1 support has been removed (this also breaks compatibility with LuaJIT, which is based primarily on Lua 5.1).
    • Some MUC default behaviors have changed regarding room creation and admin permissions (see above).

    Conclusion

    We’re very excited about this release, which represents a significant step forward for Prosody, and contains improvements for virtually every aspect of the server. From enhanced security to better performance and more flexible administration tools, there has never been a better time to deploy Prosody and take control of your realtime communications.

    As always, if you have any problems or questions with Prosody or the new release, drop by our community chat !

      blog.prosody.im/prosody-13.0.0-released/


      Erlang Solutions: Elixir vs Haskell: What’s the Difference?

      news.movim.eu / PlanetJabber • 13 March • 11 minutes

    Elixir and Haskell are two very powerful, very popular programming languages. However, each has its strengths and weaknesses. Whilst they are similar in a few ways, it’s their differences that make them more suitable for certain tasks.

    Here’s an Elixir vs Haskell comparison.

    Elixir vs Haskell: a comparison

    Core philosophy and design goals

    Starting at a top-level view of both languages, the first difference we see is in their fundamental philosophies. Both are functional languages. However, their design choices reflect very different priorities.

    Elixir is designed for the real world. It runs on the Erlang VM (BEAM), which was built to handle massive concurrency, distributed systems, and applications that can’t afford downtime, like telecoms, messaging platforms, and web apps.

    Elixir prioritises:

    • Concurrency-first – It uses lightweight processes and message passing to make scalability easier.
    • Fault tolerance – It follows a “Let it crash” philosophy to ensure failures don’t take down the whole system.
    • Developer-friendly – Its Ruby-like syntax makes functional programming approachable and readable.

    Elixir is not designed for theoretical rigour—it’s practical. It gives you the tools you need to build robust, scalable systems quickly, even if that means allowing some flexibility in functional purity.

    Haskell, on the other hand, is all about mathematical precision. It enforces a pure programming model. As a result, functions don’t have side effects, and data is immutable by default. This makes it incredibly powerful for provably correct, type-safe programs, but it also comes with a steeper learning curve.

    We would like to clarify that Elixir’s data is also immutable, but it does a great job of hiding that fact. You can “reassign” variables and ostensibly change values, but the data underneath remains unchanged. It’s just that Haskell doesn’t allow that at all.
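
    A quick sketch of what that looks like in practice: Elixir rebinds the name, not the underlying data.

    list = [1, 2, 3]
    bigger = [0 | list]    # builds a new list; `list` itself is untouched
    list = [:rebound]      # rebinds the name `list` to a different value
    IO.inspect(bigger)     # [0, 1, 2, 3]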

    Haskell offers:

    • Pure functions – No surprises; given the same input, a function will always return the same output.
    • Static typing with strong guarantees – The type system (with Hindley-Milner inference, monads, and algebraic data types) helps catch errors at compile time instead of run time.
    • Lazy evaluation – Expressions aren’t evaluated until they’re needed, which can improve performance but adds complexity.

    Haskell is a language for those who prioritise correctness, mathematical rigour, and abstraction over quick iterations and real-world flexibility. That does not mean it’s slower and inflexible. In fact, experienced Haskellers will use its strong type guarantees to iterate faster, relying on its compiler to catch any mistakes. However, it does contrast with Elixir’s gradual tightening approach. Here, interaction between processes is prioritised, and initial development is quick and flexible, becoming more and more precise as the system evolves.

    Typing: dynamic vs static

    The next significant difference between Elixir and Haskell is how they handle types.

    Elixir is dynamically typed. It doesn’t require explicitly declared variable types; types are determined at run time. As a result, it’s fast to write and easy to prototype. It allows you to focus on functionality rather than defining types up front.

    Of course, there’s a cost attached to this flexibility. If types are only checked at run time, type errors are also only detected then. Mistakes that could have been caught earlier only surface when the code is executed. In a large project, this can make debugging a nightmare.

    For example:

    def add(a, b), do: a + b  
    
    IO.puts add(2, 3)      # Works fine
    IO.puts add(2, "three") # Causes a runtime error
    

    In this example, “three” is a string where a number is expected, so the call raises an error. Since Elixir doesn’t type-check at compile time, the error is only caught when the function runs.

    Meanwhile, Haskell uses static typing, which means all variable types are checked at compile time. If there’s a mismatch, the code won’t compile. This is very helpful in preventing many classes of bugs before the code execution.

    For example:

    add :: Int -> Int -> Int
    add a b = a + b
    
    main :: IO ()
    main = do
      print (add 2 3)           -- Works fine
      -- print (add 2 "three")  -- Uncommenting this line causes a compile-time error
    

    Here, the compiler immediately catches the type mismatch and refuses to compile, so the error never reaches runtime.

    Elixir’s dynamic typing gives you faster iteration and more flexible development. However, it doesn’t rely only on dynamic typing for its robustness. Instead, it follows Erlang’s “Golden Trinity” philosophy, which is:

    • Fail fast instead of trying to prevent all possible errors.
    • Maintain system stability with supervision trees, which automatically restart failed processes.
    • Use the BEAM VM to isolate failures so they don’t crash the system.
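
    As an illustration of the supervision-tree idea, here is a minimal sketch in Elixir (the Counter worker is a made-up placeholder):

    defmodule Counter do
      use GenServer

      def start_link(initial), do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)

      def init(initial), do: {:ok, initial}
    end

    # If Counter crashes, the supervisor restarts it automatically
    children = [{Counter, 0}]
    {:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)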

    Haskell’s static typing, on the other hand, gives you long-term maintainability and correctness up front. It’s particularly useful in high-assurance software projects, where errors must be kept to a minimum before execution.

    In comparison, Elixir is a popular choice for high-availability systems. Both are highly reliable, but the former is okay with failure and relies on recovery at runtime, whilst the latter enforces correctness at compile-time.

    Concurrency vs parallelism

    When considering Haskell vs Elixir, concurrency is one of the biggest differentiators. Both Elixir and Haskell are highly concurrent but take different approaches to it. Elixir is built for carrying out a massive number of processes simultaneously. In contrast, Haskell gives you powerful—but more manual—tools for parallel execution.

    Elixir manages effortless concurrency with BEAM. The Erlang VM is designed to handle millions of lightweight processes at the same time with high fault tolerance. These lightweight processes follow the actor model principles and are informally called “actors”, although Elixir doesn’t officially use this term.

    Unlike traditional OS threads, these processes are isolated and communicate through message-passing. That means that if one process crashes, BEAM uses supervision trees to restart it automatically while making sure it doesn’t affect the others. This is typical of the ‘let it crash’ philosophy, where failures are expected and handled. There is no expectation to eliminate them entirely.

    As a result, concurrency in Elixir is quite straightforward. You don’t need to manage locks, threads, or shared memory. Load balancing is managed efficiently by the BEAM scheduler across CPU cores, with no manual tuning required.
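
    A tiny sketch of that message-passing style, using only the standard spawn, send, and receive primitives:

    pid =
      spawn(fn ->
        receive do
          {:ping, from} -> send(from, :pong)
        end
      end)

    send(pid, {:ping, self()})

    receive do
      :pong -> IO.puts("the process replied")
    after
      1_000 -> IO.puts("no reply within a second")
    end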

    Haskell also supports parallelism and concurrency but it requires more explicit management. To achieve this, it uses several concurrency models, including software transactional memory (STM), lazy evaluations, and explicit parallelism to efficiently utilise multicore processors.

    As a result, even though managing parallelism is more hands-on in Haskell, it also leads to some pretty significant performance advantages. For certain workloads, it can be several orders of magnitude faster than Elixir.

    Additionally, Cloud Haskell extends Haskell’s concurrency model beyond a single machine. Inspired by Erlang’s message-passing approach, it allows distributed concurrency across multiple nodes, making Haskell viable for large-scale concurrent systems—not just parallel computations.

    Scaling and parallelism remain among the big headaches of distributed programming. Find out what the others are.
    [ Read more ]

    Best-fit workloads

    Both Haskell and Elixir are highly capable, but the kinds of workloads for which they’re suitable are different. We’ve seen how running on the Erlang VM allows Elixir to be more fault-tolerant and support massive concurrency. It can also run processes along multiple nodes for seamless communication.

    Since Elixir can scale horizontally very easily—across multiple machines—it works really well for real-time applications like chat applications, IoT platforms, and financial transaction processing.

    Haskell optimises performance with parallel execution and smart use of system resources. It doesn’t have BEAM’s actor-based concurrency model, but its powerful language features, which allow fine-grained use of multi-core processors, more than make up for it.

    It’s perfect for applications where you need heavy numerical computations, granular control over multi-core execution, and deterministic performance.

    So, where Elixir excels at processing high volumes of real-time transactions, Haskell works better for modelling, risk analysis, and regulatory compliance.

    Ecosystem and tooling

    Both Elixir and Haskell have strong ecosystems, but you must have noticed the theme running through our narrative. Yes, both are designed for different industries and development styles.

    Elixir’s ecosystem is practical and industry-focused, with a strong emphasis on web development and real-time applications. It has a growing community and a well-documented standard library, supplemented with production-ready libraries.

    Meanwhile, Haskell has a highly dedicated community in academia, finance, human therapeutics, wireless communications and networking, and compiler development. It offers powerful libraries for mathematical modelling, type safety, and parallel computing. However, tooling can sometimes feel less user-friendly compared to mainstream languages.

    For web development, Elixir offers the Phoenix framework: a high-performance web framework designed for real-time applications, which comes with built-in support for WebSockets and scalability. It follows Elixir’s functional programming principles but keeps development accessible with a syntax inspired by Ruby on Rails.
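
    As a rough sketch of that real-time focus, a minimal Phoenix channel that relays chat messages to everyone on a topic might look like this (module and topic names are placeholders):

    defmodule ChatWeb.RoomChannel do
      use Phoenix.Channel

      # Clients join the "room:lobby" topic over a WebSocket connection
      def join("room:lobby", _params, socket), do: {:ok, socket}

      # Each incoming message is broadcast to every subscriber of the topic
      def handle_in("new_msg", %{"body" => body}, socket) do
        broadcast!(socket, "new_msg", %{body: body})
        {:noreply, socket}
      end
    end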

    Haskell’s Servant framework is a type-safe web framework that leverages the language’s static typing to ensure API correctness. While powerful, it comes with a steeper learning curve due to Haskell’s strict functional nature.

    Which one you should choose depends on your project’s requirements. If you’re looking for general web and backend development, Elixir’s Phoenix is the more practical choice. For research-heavy or high-assurance software, Haskell’s ecosystem provides formal guarantees.

    Maintainability and refactoring

    It’s important to manage technical debt while keeping software maintainable. Part of this is improving quality and future-proofing the code. Elixir’s syntax is clean and intuitive. It offers dynamic typing, meaning you can write code quickly without specifying types. This can make runtime errors harder to track sometimes, but debugging tools like IEx (Interactive Elixir) and Logger make troubleshooting straightforward.

    It’s also easier to refactor because of its dynamic nature and process isolation. Since BEAM isolates processes, refactoring can often be done incrementally without disrupting the rest of the system. This is particularly handy in long-running, real-time applications where uptime is crucial.

    Haskell, on the other hand, enforces strict type safety and a pure functional model, which makes debugging less frequent but more complex. As we mentioned earlier, the compiler catches most issues before runtime, reducing unexpected behaviour.

    However, this strictness means that refactoring in Haskell must be done carefully to maintain type compatibility, module integrity, and scope resolution. Unlike dynamically typed languages, where refactoring is often lightweight, Haskell’s strong type system and module dependencies can make certain refactorings more involved, especially when they affect function signatures or module structures.

    Research on Haskell refactoring highlights challenges like name capture, type signature compatibility, and module-level dependency management, which require careful handling to preserve correctness.

    Then, there’s pattern matching, which both languages use, but do it differently.

    Elixir’s pattern matching is flexible and widely used in function definitions and control flow, making code more readable and expressive.

    Haskell’s pattern matching is type-driven and enforced by the compiler, ensuring exhaustiveness but requiring a more upfront design.
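
    For instance, pattern matching in Elixir function heads keeps branching logic declarative (the shapes are arbitrary examples):

    defmodule Geometry do
      def area({:circle, r}), do: :math.pi() * r * r
      def area({:rectangle, w, h}), do: w * h
    end

    Geometry.area({:rectangle, 3, 4})   # => 12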

    So, which of the two is easier to maintain?

    Elixir is better suited for fast-moving projects where codebases evolve frequently, thanks to its fault-tolerant design and incremental refactoring capabilities.

    Haskell provides stronger guarantees of correctness, making it a better choice for mission-critical applications where stability outweighs development speed.

    Compilation speed

    One often overlooked difference between Elixir and Haskell is how they handle compilation and code updates.

    Elixir benefits from BEAM’s hot code swapping, where updates can be applied without stopping a running system. Additionally, Elixir compiles faster than Haskell because it doesn’t perform extensive type checking at compile time.

    This speeds up development cycles, which is what makes Elixir well-suited for projects requiring frequent updates and rapid iteration. However, since BEAM uses Just-In-Time (JIT) compilation, some optimisations happen at runtime rather than during compilation.

    Haskell, on the other hand, has a much stricter compilation process. The compiler performs heavy type inference and optimisation, which increases compilation time but results in highly efficient, predictable code.

    Learning curve

    Elixir is often considered easier to learn than Haskell. Its syntax is clean and approachable, especially if you’re coming from Ruby, Python, or JavaScript. The dynamic typing and friendly error messages make it easy to experiment without getting caught up in strict type constraints.

    Haskell, on the other hand, has a notoriously steep learning curve. It requires a shift in mindset, especially for those unfamiliar with pure functional programming, monads, lazy evaluation, and advanced type systems. While it rewards those who stick with it, the initial learning experience can be challenging, even if you’re an experienced developer.

    Metaprogramming

    Both Elixir and Haskell allow you to write highly flexible code, but they take different approaches.

    Elixir provides macros, with which you can modify and extend the language at compile time. This makes it easy to generate boilerplate code, create domain-specific languages (DSLs), and build reusable abstractions. However, improper use of macros can make code harder to debug and maintain.
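
    A small sketch of the idea: a hypothetical my_unless macro that expands into an if expression at compile time.

    defmodule ControlFlow do
      defmacro my_unless(condition, do: block) do
        quote do
          if unquote(condition), do: nil, else: unquote(block)
        end
      end
    end

    require ControlFlow

    ControlFlow.my_unless 1 > 2 do
      IO.puts("1 is not greater than 2")
    end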

    Haskell has no everyday macro system (Template Haskell exists but is far less routinely used) and compensates instead with powerful type-level programming. Features like type families and higher-kinded types allow you to enforce complex rules at the type level. This enables incredible flexibility, but it also makes the language even harder to learn.

    Choosing between the two

    As you’ve seen, both Elixir and Haskell can be great, if used correctly in the right circumstances.

    If you do choose Elixir, we’ve got a great resource that discusses how Elixir and Erlang—the language that forms its foundation—can help in future-proofing legacy systems. Find out how their reliability and scalability make them great for modernising infrastructures.

    [ Read more ]

    Want to learn more? Drop the Erlang Solutions team a message.

    The post Elixir vs Haskell: What’s the Difference? appeared first on Erlang Solutions .


      Mathieu Pasquet: slixmpp v1.9.1

      news.movim.eu / PlanetJabber • 12 March

    This is mostly a bugfix release over version 1.9.0.

    The main fix is to the Rust JID implementation, which would behave incorrectly when hashed if the JID contained non-ASCII characters. This is an important issue, as using a non-ASCII JID was mostly broken, and interacting with one failed in interesting ways.

    Fixes

    • The previously mentioned JID hash issue
    • Various edge cases in the roster code
    • One edge case in join_muc_wait in the MUC (XEP-0045) plugin
    • Removed one broken entrypoint from the package
    • Fixed some issues in the MUC Self-Ping (XEP-0410) plugin

    Enhancements

    • Stanza objects now have a __contains__ method (used by x in y) that allows checking whether a plugin is present.
    • The “You should catch Iq… exceptions” message now includes the traceback
    • The MUC Self-Ping (XEP-0410) plugin allows custom intervals and timeouts for each MUC.
    • Added a STRICT_INTERFACE mode (currently a global variable in the stanzabase module) that controls whether accessing a non-existent stanza attribute raises an error or only warns; previously it only warned.
    • The CI does more stuff
    • More type hints here and there

    Links

    You can find the new release on Codeberg, PyPI, or, in a short while, the distributions that package it.

      blog.mathieui.net/en/slixmpp-1.9.1.html


      Erlang Solutions: Understanding Big Data in Healthcare

      news.movim.eu / PlanetJabber • 6 March • 7 minutes

    Healthcare generates large amounts of data every day, from patient records and medical scans to treatment plans and clinical trials. This information, known as big data, has the potential to improve patient care, boost efficiency, and drive innovation. But many organisations are still figuring out how to use it effectively.


    With AI-driven analytics, wearable technology, and real-time monitoring, healthcare providers, insurers, and pharmaceutical companies are using data to make better decisions for patients, personalise treatments, and predict health trends. So how can you do the same?

    Let’s explore the fundamentals of big data in healthcare, its real-world impact and what steps leaders can take to maximise its growing impact.

    What is Big Data?

    Big data refers to the vast amounts of structured and unstructured information from patient records, medical imaging, wearables, and clinical research. Proper analysis can improve patient care, support better decision-making, and reduce costs.

    This data comes from a wide range of sources, including electronic health records (EHRs), test results, diagnoses, medical images, and real-time data from smart wearables. It also includes healthcare-related financial and demographic information. When properly analysed, it helps identify patterns, predict health trends, and support evidence-based decision-making.

    The global big data in healthcare market is expanding quickly and is expected to be worth USD 145.42 billion by 2033. As more organisations adopt AI-driven analytics and machine learning, data is becoming a key driver of innovation, helping healthcare professionals deliver more personalised and effective care.

    The Three V’s of Big Data

    To better understand big data, we can break it down into three key characteristics: volume, velocity, and variety.

    [Figure: the three V’s of big data in healthcare – volume, velocity, and variety]

    1. Volume

    The industry produces massive amounts of data, from electronic health records (EHRs) and medical imaging to clinical research and wearable devices. The total volume of healthcare data doubles every 73 days. Managing this requires advanced storage solutions, such as cloud computing and NoSQL databases, to handle both structured and unstructured data effectively.

    2. Velocity

    Healthcare data is constantly being created. Real-time data streams from patient monitoring systems, wearable technology, and AI-powered diagnostics provide continuous updates. To be useful, this data must be processed instantly, allowing professionals to make fast, informed decisions that support better patient care.

    3. Variety

    Healthcare data comes in many formats, from structured databases to unstructured text, images, videos, and biometric data. Around 80% of healthcare data is unstructured, meaning it doesn’t fit neatly into traditional databases. A patient’s medical history might include lab results, prescriptions, clinician notes, and radiology reports, all in different formats. Integrating and analysing this diverse information is essential for identifying trends and improving treatments.

    Mastering these three V’s helps healthcare organisations make better use of data, leading to more accurate diagnoses, personalised treatments, and improved patient outcomes.

    Key Sources of Healthcare Data

    Now that we’ve discussed the Three V’s, it’s important to explore where this data originates. The primary sources of healthcare data contribute to the massive volumes of information, real-time updates, and diverse formats that we’ve just covered.

    Here are some of the key sources:

    • Electronic Health Records (EHRs) & Medical Records (EMRs) – Digital records containing patient histories, test results, and prescriptions.
    • Wearable Devices & Health Apps – Smartwatches, fitness trackers, and remote monitoring tools that gather real-time health metrics.
    • Medical Imaging & Genomic Data – X-rays, MRIs, and DNA sequencing that assist in diagnostics, research, and precision medicine.
    • Clinical Trials & Research Databases – Data from large-scale studies that drive medical advancements and evidence-based medicine.
    • Public Health & Epidemiological Data – Population health data that track disease trends and guide public health strategies.
    • Hospital Information Systems (HIS) & Administrative Data – Operational data that help manage resources and patient flow within healthcare facilities.

    These sources contribute to the expanding pool of healthcare data, helping organisations make smarter decisions and deliver better care for patients.

    Benefits of Big Data in Healthcare

    As healthcare organisations continue to collect more data, big data is proving to be a game-changer in improving patient care, driving clinical outcomes, and making healthcare more efficient. By analysing vast amounts of information, providers can identify trends and patterns that may have otherwise gone unnoticed. Below are some of the key benefits that big data brings to healthcare, from better patient care to more effective operations.

    • Improved Patient Care – Identifies patterns to predict and prevent diseases, enabling personalised care. Impact: could save the healthcare industry £230 billion to £350 billion annually through improved care and efficiency.
    • Cost Reduction – Optimises resource allocation, reduces waste, and improves efficiency. Impact: predictive analytics can cut hospital readmissions by up to 20%, leading to significant savings.
    • Enhanced Clinical Outcomes – Integrates data to identify the most effective treatments for patients. Impact: improves clinical decision-making with real-time insights and evidence-based recommendations.
    • Accelerated Medical Research – Offers large datasets for faster analysis, cutting clinical trial time and costs. Impact: reduces clinical trial times by 30% and associated costs by 50%.
    • Predictive Analytics – Forecasts patient needs, improving outcomes and reducing readmissions. Impact: helps optimise resources and reduce readmission rates, improving care and reducing costs.
    • Precision Medicine – Tailors treatments based on individual characteristics like genetics. Impact: enables more targeted and effective treatment plans.
    • Population Health Management – Identifies at-risk populations for targeted interventions. Impact: reduces the prevalence of chronic diseases through early detection and personalised care.
    • Operational Efficiency – Improves processes like inventory management and reduces waste. Impact: enhances resource management, reduces costs, and improves service delivery.

    Data Privacy and Security in Healthcare

    While big data enhances patient care and efficiency, it also brings critical data security challenges. IBM’s 2024 Cost of a Data Breach report highlights that the average healthcare breach costs $9.77 million. Protecting patient data is crucial for maintaining trust and avoiding risks.

    [Figure: healthcare data breach cost statistics. Source: Cost of a Data Breach Report, IBM]

    Key Challenges in Healthcare Data Security

    Several issues make healthcare data security more difficult:

    • Outdated Systems – Older systems may have security gaps that hackers can exploit.
    • Weak Passwords – Simple or reused passwords make it easier for unauthorised people to access sensitive data.
    • Internal Threats – Employees or contractors could accidentally or intentionally compromise data security.
    • Mobile and Cloud Security – As healthcare uses more mobile devices and cloud storage, keeping data safe across different platforms becomes harder.

    With so much data being collected and shared, these challenges are becoming more complex, making it crucial to stay on top of security measures.

    Regulatory Framework: HIPAA and Beyond

    In the U.S., the Health Insurance Portability and Accountability Act (HIPAA) sets the rules for protecting patient data. While HIPAA covers the basics, healthcare organisations need to stay on top of evolving security threats and regulations as technology changes.

    Besides HIPAA, other important regulations include the HITECH Act, which supports the use of electronic health records (EHRs) and strengthens privacy protections, and the General Data Protection Regulation (GDPR) in the European Union, which controls how personal data is used and gives patients more control over their information.

    In our previous blog, The Golden Age of Data in Healthcare, we touched on the challenges that come with using new technologies like blockchain. While blockchain offers secure data storage, it also raises concerns around data ownership and staying compliant with rules like HIPAA and GDPR.

    Solutions to Enhance Healthcare Data Security

    To better protect patient data, healthcare organisations should implement:

    • Data Encryption: Keeps data secure even if intercepted.
    • Multi-Factor Authentication (MFA): Adds an extra layer of security by requiring more than just a password.
    • System Monitoring and Threat Detection: Monitoring systems for unusual activity helps quickly spot potential breaches.
    • Employee Training: Teaching staff about security best practices and how to spot phishing attempts helps reduce risks.

    By following clear security measures and meeting regulatory requirements, organisations can prevent breaches and keep patient trust intact.

    Enhancing Healthcare Security with Erlang, Elixir, and SAFE

    As we’ve seen, healthcare faces ongoing security challenges such as outdated systems, weak passwords, internal threats, and securing mobile and cloud data. Erlang and Elixir, by their very nature, offer solutions to these problems.

    • Outdated systems: Erlang and Elixir are built for high availability and fault tolerance, ensuring critical healthcare systems remain operational without the risk of system failures, even when legacy infrastructure is involved.
    • Weak passwords & internal threats: Both technologies provide process isolation and robust concurrency, limiting the impact of internal threats and reducing the risk of unauthorised access.
    • Mobile and cloud security: With Erlang and Elixir’s scalability and resilience, securing data across mobile platforms and cloud environments becomes easier, supporting secure, seamless data exchanges.

    To further bolster security, SAFE (Security Audit for Erlang/Elixir) helps healthcare providers identify vulnerabilities in their systems. This service:

    • Identifies vulnerabilities in code that could expose systems to attacks.
    • Assesses risk levels to prioritise fixes.
    • Provides detailed reports that outline specific issues and solutions.

    By combining the inherent security benefits of Erlang and Elixir with the proactive audit capabilities of SAFE, healthcare organisations can safeguard patient data, reduce risk, and stay compliant with regulations like HIPAA.

    Conclusion

    Big data is transforming healthcare by improving patient care and outcomes. However, with this growth comes the need to secure sensitive data and ensure compliance with regulations like HIPAA and GDPR.

    Erlang and Elixir naturally address key security challenges, helping organisations protect patient information. Tools like SAFE identify vulnerabilities, reduce risks, and ensure compliance.

    Ultimately, securing patient data is critical for maintaining trust and delivering quality care. If you would like to talk more about securing your systems or staying compliant, contact our team.

    The post Understanding Big Data in Healthcare appeared first on Erlang Solutions .


      Erlang Solutions: Highlights from CodeBEAM Lite London

      news.movim.eu / PlanetJabber • 20 February, 2025 • 6 minutes

    The inaugural CodeBEAM Lite London conference was held at CodeNode last month, featuring 10 talks, 80 attendees, and an Erlang Solutions booth. There, attendees had the chance to set a high score in a BEAM-based asteroid game created by ESL’s Hernan Rivas Acosta, and win an Atari replica.

    Learning from and networking with experts across the BEAM world was an exciting opportunity. Here are the highlights from the talks at the event.

    Keynote: Gleam’s First Year

    Louis Pilfold kicked things off with an opening keynote all about Gleam, the statically-typed BEAM language he designed and developed, and which announced its version 1.0 a year ago at FOSDEM in Brussels.

    Louis laid out the primary goals of v1: productivity and sustainability, avoiding breaking changes and language bloat, and extensive, helpful, and easily navigable documentation. He then walked us through some of the progress made on Gleam in its first year of official release, with a particular focus on the many convenience and quality-of-life features of the language server, written in Rust. Finally, he measured Gleam’s success throughout 2024 in terms of Github usage and sponsorship money and looked forward to his goals for the language in 2025.

    The Art of Writing Beautiful Code

    “Make it work, then make it beautiful, then if you really, really have to, make it fast. 90 per cent of the time, if you make it beautiful, it will already be fast. So really, just make it beautiful!” Most of us are likely familiar with this famous Joe Armstrong quote, but what does it actually mean to write beautiful code?

    This question was the focus of Brujo Benavides’ talk, a tour through various examples of “ugly” code in Erlang, some of which may well be considered beautiful by programmers trying to avoid repeating code. If beauty is in the eye of the beholder, what’s more important is that each project has a consistent definition of what “beautiful” means. Brujo explored different methods of achieving this consistency, and how to balance it with the need for fast commits of important changes in a project.

    Why Livebook is My Dream Data Science Workbench

    Amplified’s Christopher Grainger took a more cerebral approach to his talk on Livebook, drawing on his background as both a historian and a data scientist to link the collaborative notebook software to a tradition of scientific collaboration dating back thousands of years.

    In his view, the fragmentation of the digital age led to key components of this tradition being lost; he explored how Livebook’s BEAM architecture brings it closer to being a digital equivalent of real-time collaboration in a lab than prior technologies like Jupyter Notebooks, and what further steps could be taken to get even closer to it in the future.

    Deploying Elixir on Azure With Some Bonus Side Quests

    Matteo Gheri of Pocketworks provided an industrial example of Elixir in action, explaining how his company used Azure in the course of building a Phoenix app for UK-based taxi company Veezu.

    Azure is used to host only 3.2% of Elixir apps, and Matteo walked through their journey figuring it out in detail, touching on deployment, infrastructure, CI/CD, and the challenges they encountered.

    Let’s Talk About Tests

    Erlang Solutions’ own Natalia Chechina took the stage next for a dive into the question of tests. She explored ways of convincing managers of the importance of testing, which types of test to prioritise depending on the circumstances of the project, and how to best structure testing in order to prevent developers from burning out, stressing the importance of both making testing a key component of the development cycle and cultivating a positive attitude towards testing.

    Eat Your Greens: A Philosophy for Language Design

    Replacing Guillaume Duboc’s cancelled talk on Elixir types was Peter Saxton, developer of a new language called Eat Your Greens (EYG). The philosophy behind the title refers to doing things that may be boring or unenjoyable but which lead to benefits in the long run, such as eating vegetables; Peter cited types as an example of this, and as such EYG is statically, structurally, and soundly typed. He then walked through other main features of his language, such as closure serialisation as JSON, hot code reloading, and the ability for it to be run entirely through keyboard shortcuts.

    Trade-Offs Using JSON in Elixir 1.18: Wrappers vs. Erlang Libraries

    Michał Muskała has a long history working with JSON on the BEAM, starting with his development of the Jason parser and generator, first released in 2017. He talked us through that history; writing Jason, turning his focus to Erlang/OTP and proposing a JSON module there, and then building on that for the Elixir JSON module, now part of the standard library in 1.18.

    He discussed the features of this new module, why it was better to use wrappers while transitioning to Elixir instead of calling Erlang directly, and how to simplify migration from Jason to JSON in advance of OTP 27 eventually being required by Elixir.

    Distributed AtomVM: Let’s Create Clusters of Microcontrollers

    A useless machine and a tiny, battery-free LED device played central roles in Paul Guyot’s dive into AtomVM, an Erlang- and Elixir-based virtual machine for microcontrollers. He kicked off by demonstrating La machine, the first commercial AtomVM product, albeit without an internet connection, before explaining AtomVM’s intended use in IoT devices, and the recent addition of distributed Erlang. This was backed up by another demonstration, this time of the appropriately named “2.5g of Erlang” device. Finally, he explained AtomVM’s advantages compared to other IoT VMs and identified the next steps for the project.

    Erlang and RabbitMQ: The Erlang AMQP Client in Action

    Katleho Kanyane from Erlang Solutions then provided another industry use case, discussing how he helped to implement a RabbitMQ publisher using the Erlang AMQP client library while working with a large fintech client. Katleho talked through some of the basics of RabbitMQ implementation, best practices, and two issues he ran into involving flow control, an overload prevention feature in RabbitMQ that throttles components and leads to drastically reduced transfer rates. He wrapped up by discussing lessons he learned from the process and laying out a few guidelines for designing a publisher.

    Keynote: Introducing Tau5 – A New BEAM-Powered Live Coding Platform

    The closing keynote was also the only talk of the day to kick off with a music video, though that should be expected when live coding artist and Sonic Pi creator Sam Aaron is the one delivering it. Sam spoke passionately about his goal to make programming something that everyone should be able to try without needing or wanting to become a professional and discussed his history of using Sonic Pi’s live coding software in education, including how he worked some complicated concepts such as concurrency in without confusing students or teachers.

    He then discussed the limitations of Sonic Pi and how they are addressed by his new project, Tau5. While still in the proof-of-concept stage, Tau5 improves on Sonic Pi by being built on OTP from the ground up, being able to run in the browser, and including new features like visuals to add to live performances. He concluded with a demonstration of Tau5 and an explanation of his intentions for the project.

    Final Thoughts

    CodeBEAM Lite London 2025 was a fantastic day filled with fascinating talks, cool demos, and plenty more to excite any BEAM enthusiast. From hearing about the latest Gleam developments to diving into live coding with Tau5, it was clear that the community is thriving and full of creative energy. Whether it was learning tips for practical BEAM use or exploring cutting-edge new tools and languages, there was something for everyone.

    If you missed out this time, don’t worry: you’ll be welcome at the next one, and we hope to see you there. Until then, keep building, keep experimenting, and above all keep having fun with the BEAM!

    The post Highlights from CodeBEAM Lite London appeared first on Erlang Solutions .


      Erlang Solutions: DORA Compliance: What Fintech Businesses Need to Know

      news.movim.eu / PlanetJabber • 12 February, 2025 • 7 minutes

    The Digital Operational Resilience Act (DORA) is now in effect as of 17th January 2025, making compliance mandatory for fintech companies, financial institutions, and ICT providers across the UK and EU. With over 22,000 businesses impacted, DORA sets clear expectations for how firms must manage operational resilience and protect against cyber threats.

    As cybercriminals become more sophisticated, regulatory action has followed. DORA is designed to ensure that businesses have the right security measures in place to handle disruptions, prevent data breaches, and stay operational under pressure.

    Yet, despite having time to prepare, 43% of organisations admit they won’t be fully compliant for at least another three months. But non-compliance isn’t just a delay. It comes with serious risks, including penalties and reputational damage.

    So, what does DORA mean for your fintech business? Why is compliance so important, and how can you make sure you meet the requirements?

    What is DORA?

    With technology at the heart of financial services, the risks associated with cyber threats and ICT disruptions have never been higher. The European Parliament introduced the Digital Operational Resilience Act (DORA) to strengthen the financial sector’s ability to withstand and recover from these digital risks.

    Originally drafted in September 2020 and ratified in 2022, DORA officially came into force in January 2025. It establishes strict requirements for managing ICT risks, ensuring financial institutions follow clear protection, detection, containment, recovery, and repair guidelines.

    A New Approach to Cybersecurity

    This regulation marks a major step forward in cybersecurity, prioritising operational resilience to keep businesses running even in the face of severe cyber threats or major ICT failures. Compliance will be monitored through a unified supervisory approach, with the European Banking Authority (EBA), the European Insurance and Occupational Pensions Authority (EIOPA), and the European Securities and Markets Authority (ESMA) working alongside national regulators to enforce the new standards.

    A report from the European Supervisory Authorities (EBA, EIOPA, and ESMA) highlighted that in 2024, of the registers analysed during a ‘dry run’ exercise involving nearly 1,000 financial entities across the EU, just 6.5% passed all data quality checks. This shows just how demanding the requirements are, and the importance of getting it right early for a smooth path to compliance.

    The Five Pillars of DORA

    DORA introduces firm rules on ICT risk management, incident reporting, resilience testing, and oversight of third-party providers. Rather than a one-size-fits-all approach, compliance depends on factors like company size, risk tolerance, and the type of ICT systems used. However, at its core, DORA is built around five key pillars that form the foundation of a strong operational resilience framework.

    [Figure: the five pillars of DORA. Source: Zapoj]

    These pillars also serve as the basis for a DORA compliance checklist, which businesses can use to ensure they meet regulatory requirements.

    Below is a breakdown of each pillar and what businesses need to do to comply:

    1. ICT Risk Management

    Businesses must establish a framework to identify, assess, and mitigate ICT risks. This includes:

    • Conducting regular risk assessments to spot vulnerabilities.
    • Implementing security controls to address identified risks.
    • Developing a clear incident response plan to handle disruptions effectively.

    2. ICT-Related Incident Reporting

    Companies must have structured processes to detect, report, and investigate ICT-related incidents. This involves:

    • Setting up clear reporting channels for ICT issues.
    • Classifying incidents by severity to determine response urgency.
    • Notifying relevant authorities promptly when serious incidents occur.

    3. Digital Operational Resilience Testing

    Financial institutions are required to test their ICT systems regularly to ensure they can withstand cyber threats and operational disruptions. This includes:

    • Running simulated attack scenarios to test security defences.
    • Assessing the effectiveness of existing resilience measures.
    • Continuously improving systems based on test results.

    4. ICT Third-Party Risk Management

    DORA highlights the importance of managing risks linked to third-party ICT providers. Businesses must:

    • Conduct due diligence before working with external service providers.
    • Establish contractual agreements outlining security expectations.
    • Continuously monitor third-party performance to ensure compliance.

    5. Information Sharing

    Collaboration is a key part of DORA, with financial institutions encouraged to share cyber threat intelligence. This may include:

    • Participating in industry forums to stay informed about emerging threats.
    • Sharing threat intelligence with peers to strengthen collective defences.
    • Conducting joint cybersecurity exercises to improve incident response.

    By following these five pillars, businesses can build a strong foundation for digital resilience. Compliance isn’t just about meeting regulatory requirements, it’s about safeguarding operations, protecting customers, and strengthening the financial sector against growing cyber threats.

    How to Achieve DORA Compliance for Your Business

    Regardless of the stage of compliance a business is in, there are a few key areas it must focus on to stay protected. Here’s what you need to do:

    Understand DORA’s Scope and Requirements

    The first step to DORA compliance is understanding what’s required. Take the time to familiarise yourself with its requirements and ask any questions.

    Conduct a Risk Assessment

    A solid risk assessment is at the heart of DORA compliance. Identify and evaluate risks across your ICT systems—this includes everything from cyber threats to software glitches. Understanding these risks helps you plan how to minimise their impact on your operations.

    Create a Resilience Strategy

    With your risk assessment in hand, develop a tailored resilience strategy. This should include:

    • Preventive Measures: Set up cyber defences and redundancy systems to prevent disruptions.
    • Detection Systems: Ensure you can quickly spot any anomalies or threats.
    • Response and Recovery Plans: Have clear plans in place to respond and recover if an incident happens.

    Invest in Cybersecurity and IT Infrastructure

    To meet DORA compliance for business, invest in strong cybersecurity tools like firewalls and encryption. Ensure your IT infrastructure is resilient, with reliable backup and recovery systems to minimise disruptions.

    Strengthen Incident Reporting

    DORA stresses the importance of quick and accurate incident reporting. Establish clear channels for detecting and reporting ICT incidents, ensuring timely updates to authorities when needed.

    Build a Culture of Resilience

    Resilience is an ongoing effort. To stay compliant, create a culture where resilience is top of mind:

    • Provide regular staff training.
    • Regularly test and audit your systems.
    • Stay updated on emerging risks and technologies.

    Partner with IT Experts

    DORA compliance can be tricky, especially if your team lacks in-house expertise. Partnering with IT service providers who specialise in compliance can help you meet DORA’s requirements more smoothly.

    Consequences for Non-Compliance

    We’ve already established the importance of meeting DORA’s strict mandates. But failing to comply with these regulations can have serious consequences for businesses, from hefty fines to operational restrictions. Here’s what businesses need to be aware of to protect their organisation:

    Fines for Non-Compliance

    • Up to 2% of global turnover or €10 million, whichever is higher, for non-compliant financial institutions.
    • Third-party ICT providers could face fines as high as €5 million or 1% of daily global turnover for each day of non-compliance.
    • Failure to report major incidents within 4 hours can lead to further penalties.

    Reputational Damage and Leadership Liability

    • Public notices of breaches can cause lasting reputational damage, affecting business trust and relationships.
    • Business leaders can face personal fines of up to €1 million for failing to ensure compliance.

    Operational Restrictions

    • Regulators can limit or suspend business activities until compliance is achieved.
    • Data traffic records can be requested from telecommunications operators if there’s suspicion of a breach.

    How Erlang Solutions Can Help You with DORA Compliance

    Don’t panic, prioritise. If you’ve identified that your business may be at risk of non-compliance, taking action now is key. Erlang Solutions can support you in meeting DORA’s requirements through our Security Audit for Erlang and Elixir (SAFE).

    With extensive experience in the financial sector, we understand the critical need for resilient, scalable systems. Our expertise with Erlang and Elixir has helped leading fintech institutions, including Klarna, Vocalink, and Ericsson, build fault-tolerant, high-performing and compliant systems.

    SAFE is aligned with several key areas of DORA, including ICT risk management, resilience testing, and third-party risk management:

    • Proactive Risk Identification and Mitigation : SAFE identifies vulnerabilities and provides recommendations to address risks before they become critical. This proactive approach supports DORA’s requirements for continuous ICT risk management.
    • Continuous Monitoring Capabilities : SAFE allows ongoing monitoring of your systems, which aligns with DORA’s emphasis on continuous risk detection and mitigation.
    • Detailed Incident Response Recommendations : SAFE’s detailed findings help you refine your incident response and recovery plans, ensuring your systems are prepared to quickly recover from cyberattacks or disruptions.

    • Third-Party Risk Management : The security audit can provide insights into your third-party integrations, helping to ensure they meet necessary security standards and comply with DORA’s requirements.

    Conclusion

    DORA compliance is now in effect, making it essential to act if your business isn’t fully compliant. Delays can lead to penalties and increased risk exposure. Prioritising ICT risk management, strengthening resilience, and ensuring proper incident reporting will bring you closer to compliance. But this isn’t just about meeting requirements; it’s about safeguarding your organisation and building long-term operational resilience.

    If you have compliance concerns or just want to talk through your next steps, we’re here to help. Contact us to talk through your options.

    The post DORA Compliance: What Fintech Businesses Need to Know appeared first on Erlang Solutions .


    • chevron_right

      Erlang Solutions: Understanding Digital Wallets

      news.movim.eu / PlanetJabber • 23 January, 2025 • 7 minutes

    Digital wallets, once considered futuristic, have now become essential tools for both consumers and businesses. But what are digital wallets , and why should you care about them? Customer expectations are changing, and many companies are turning to digital wallets to streamline transactions and enhance the customer experience.

    This guide unpacks the fundamentals of digital wallets, highlighting their benefits, market trends, and implications for businesses.

    What Are Digital Wallets?

    Digital wallets (or e-wallets) have changed the way we make and receive payments. By 2025, digital payments are expected to account for 50% of global payments .

    At their core, digital wallets store a user’s payment information, securely encrypted for seamless transactions. This could involve credit card details, bank accounts, or even cryptocurrencies.

    Apple Pay , Google Wallet , PayPal , and Samsung Pay have become household names, but the ecosystem is much broader and growing rapidly as more industries recognise their potential. Digital wallets simplify purchases and integrate with loyalty programmes, personal finance management, and even identity verification , offering a comprehensive solution for consumers and businesses alike.

    How Do Digital Wallets Work?

    Digital wallets offer a secure and straightforward way to manage transactions. In a time when data breaches are increasingly common, security has never been more important. With cybercrime damages projected to reach $10.5 trillion annually in 2025 , they play a major role in keeping financial information safe.

    Here’s how they work. First, you link your financial details to the wallet. This could mean adding a credit card or connecting a bank account. Once your details are in, the wallet uses encryption and tokenisation to protect your sensitive information, converting it into a secure format that’s almost impossible for unauthorised parties to access.

    When you make a payment, the process is quick and simple: tap, scan, or click. Behind the scenes, your digital wallet securely communicates with the payment processor to authorise the transaction. With advanced security measures like encryption and tokenisation, digital wallets not only reduce the risk of fraud but also allow for a seamless and reliable user experience.
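
    As a rough illustration of the tokenisation step described above, here is a minimal Python sketch; the flow and names are assumptions for illustration, not any specific provider’s API:

    ```python
    # Minimal sketch: the merchant and the payment request only ever see a random
    # token; the real card number stays in the provider's secure vault.
    import secrets

    vault: dict[str, str] = {}  # token -> card details, held by the wallet provider

    def tokenise(card_number: str) -> str:
        token = secrets.token_hex(16)
        vault[token] = card_number
        return token

    def authorise(token: str, amount_pence: int) -> bool:
        # Stand-in for the real authorisation call: the provider resolves the
        # token internally and never exposes the underlying card number.
        return token in vault and amount_pence > 0

    token = tokenise("4111 1111 1111 1111")
    print(authorise(token, 2500))  # True, without the card number leaving the vault
    ```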

    Types of Digital Wallets

    Now let’s explore the various types of digital wallets available:

    1. Closed wallets

    Closed wallets are issued by a single company and can only be used for purchases from that company, Amazon’s store balance being a well-known example.

    2. Semi-closed wallets

    Semi-closed wallets, like Paytm or Venmo, allow payments at select merchant locations or online stores that accept their platform.


    3. Open wallets

    Backed by major financial institutions, open wallets allow broader transactions, including withdrawals, online purchases, and transfers. Popular examples include PayPal and Google Pay .

    4. Prepaid Wallets

    Prepaid wallets let you load funds in advance, so you use only what’s available. Once the balance is depleted, you just reload the wallet. This approach is great for budgeting.

    Choosing the right digital wallet depends on your business model.

    Whether you’re looking for customer loyalty through closed wallets or broader international reach with open wallets, selecting the right type will drive better engagement and efficiency.

    Why Should Businesses Care?

    The rise of digital wallets represents a strategic opportunity for businesses to serve their customers better and improve their bottom line. Here’s why:

    Enhanced customer experience

    Digital wallets streamline the checkout process, reducing friction and improving customer satisfaction. Features like one-click payments and loyalty integrations can drive repeat business.

    Improved security

    Tokenisation and encryption reduce the risks associated with traditional payment methods. This not only protects users but also helps businesses build trust.

    Cost efficiency

    Payment processors for digital wallets often charge lower fees than those for traditional credit card transactions, which can run as high as 3%. Depending on the provider, digital wallets can significantly cut these costs.

    Global reach

    For companies aiming to expand internationally, digital wallets simplify cross-border transactions by supporting multiple currencies.

    Digital wallets offer tangible benefits: enhanced customer experience, improved security, and cost efficiency. Businesses that integrate them can streamline payments and improve retention and satisfaction, driving growth.

    Integrating Digital Wallets into Your Business

    Before jumping into digital wallets, it’s worth taking a moment to plan things out. A bit of strategy can go a long way.

    Here are some key things to keep in mind:

    • Know what your customers want : Look at your data or run a quick survey to find out which wallets your customers use most.
    • Pick the right payment processor : Go for a provider that supports lots of wallets. This gives you flexibility and makes it easier to grow.
    • Focus on security : Work with experts, like Erlang Solutions , to help build secure systems that keep data safe and meet the necessary guidelines around payments.
    • Test, optimise and refine : Start with a proof of concept to see how things work. We can help you get this done quickly so you can adjust and stay ahead of the game.

    By understanding what your customers need and choosing flexible payment options, you can bring digital wallets into your business without any hiccups. Picking the right tech also means your operations keep running smoothly while you embrace innovations.
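
    As a small illustration of the first point in the checklist above (looking at your own data to see which wallets customers actually use), here is a minimal Python sketch; the transaction records and field names are hypothetical:

    ```python
    # Hypothetical transaction records: count which wallets customers already use
    # before deciding which ones to enable at checkout.
    from collections import Counter

    transactions = [
        {"order_id": 1, "wallet": "apple_pay"},
        {"order_id": 2, "wallet": "google_pay"},
        {"order_id": 3, "wallet": "apple_pay"},
        {"order_id": 4, "wallet": "paypal"},
    ]

    usage = Counter(t["wallet"] for t in transactions)
    print(usage.most_common())  # [('apple_pay', 2), ('google_pay', 1), ('paypal', 1)]
    ```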

    Challenges and Considerations

    While digital wallets offer numerous benefits, they’re not without challenges:

    • Adoption barriers : Older demographics or tech-averse users may still prefer traditional payment methods. According to AARP , about 50% of older adults in the U.S. feel uncomfortable with new payment technologies. Businesses need strategies to educate and ease this transition.
    • Risk of fraud : While secure, digital wallets are not immune to hacking or phishing attacks. Companies must ensure continuous security updates and user education on best practices.
    • Regulatory compliance : Navigating the global landscape of payment regulations can be complex. From GDPR to PSD2 , businesses must comply with relevant laws, especially when handling international transactions.

    While digital wallets offer advantages, businesses must address adoption barriers, security concerns, and regulatory compliance. Preparing for these challenges allows for a smooth transition and mitigates potential risks.

    Industries Using Digital Wallets

    We’ve established how digital wallets are revolutionising the way we handle payments, making transactions faster, safer, and more convenient. Here are some of the industries making the most of this technology.

    Fintech

    In the fintech world, digital wallets have become indispensable. For instance, Erlang Solutions collaborated with TeleWare to enhance their Re:Call app with secure instant messaging capabilities for a major UK financial services group. By integrating MongooseIM, they ensured compliance with strict regulatory requirements while improving user experience.

    E-commerce

    Online shopping has been transformed by digital wallets. In 2021, a quarter of all UK transactions were made using digital wallets, and this trend is expected to grow by 18.9% through 2028. Features like biometric authentication not only make the checkout process quicker but also enhance security, leading to happier customers and increased loyalty.

    Gaming

    Gamers love convenience, and digital wallets deliver just that.

    By consolidating various payment methods, wallets like PayPal and Google Pay make in-game purchases seamless. This ease of use not only reduces transaction fees but also keeps players engaged, boosting customer retention.

    Banking

    Traditional banks are catching up by integrating digital wallets into their services. These wallets often combine payment processing with features like loyalty programmes and travel card integration. Advanced security measures, including biometric authentication, ensure that customers feel secure while enjoying personalised, cashless payment solutions.

    The Future of Digital Wallets

    The future of digital wallets lies in innovation.

    Here are some of the trends poised to shape the landscape over the next few years:

    • Integration with wearable tech: Smartwatches and fitness trackers will make payments even more convenient.
    • Biometric authentication : Consumers increasingly demand convenience without sacrificing security. Biometric features such as fingerprint recognition, voice ID, and facial scans will become commonplace, providing higher protection.
    • Cryptocurrency support : As digital currencies gain acceptance, more wallets are supporting crypto transactions. With over 300 million cryptocurrency users worldwide, businesses must be ready to accommodate this growing market.

    You can explore even more key digital payment trends here .

    Staying ahead of these trends will position your business as a forward-thinking leader in the digital economy.

    To conclude

    Digital wallets aren’t just another way to pay; they’re a game-changer for improving customer experience, boosting security, and driving growth. Nearly half the world’s consumers are already using them, and with transaction values expected to hit over $10 trillion by 2026, they’re becoming a must-have for businesses.

    The big question for leaders isn’t whether to integrate them, but how to do it right. Now’s the perfect time to get started. By focusing on secure tech, understanding your customers, and keeping an eye on trends, you can unlock massive benefits. Erlang Solutions has the expertise to help you build digital wallet solutions that are secure and scalable. Ready to chat about your strategy? Drop us a message today .


    The post Understanding Digital Wallets appeared first on Erlang Solutions .

    • chevron_right

      ProcessOne: How Big Tech Pulled Off the Billion-User Heist

      news.movim.eu / PlanetJabber • 16 January, 2025 • 10 minutes


    For many years, I have heard countless justifications for keeping messaging systems closed. Many of us have tried to rationalize walled gardens for various reasons:

    • Closed messaging systems supposedly enable faster progress, as there’s no need to collaborate on shared specifications or APIs. You can change course more easily.
    • Closed messaging systems are better for security, spam, or whatever other risks we imagine, because owners feel they have better control of what goes in and out.
    • Closed messaging systems are said to foster innovation by protecting the network owner’s investments.

    But is any of this really true? Let’s take a step back and examine these claims.

    A Brief History of Messaging Tools

    Until the 1990s, messaging systems were primarily focused on building communities. The dominant protocol of the time was IRC (Internet Relay Chat) . While IRC allowed private messaging, its main purpose was to facilitate large chatrooms where people with shared interests could hang out and interact.

    In the 1990s, messaging evolved into a true communication tool, offering an alternative to phone calls. It enabled users to stay in touch with friends and family while forging new connections online. With the limitations of the dial-up era, where users weren’t always connected, asynchronous communication became the norm. Features like offline messages and presence indicators emerged, allowing users to see at a glance who was online, available, or busy.

    The revolution began with ICQ , quickly followed by competitors like Yahoo! Messenger and MSN Messenger . However, this proliferation of platforms created a frustrating experience: your contacts were spread across different networks, requiring multiple accounts and clients. Multiprotocol clients like Meebo and Pidgin emerged, offering a unified interface for these networks. Still, they often relied on unofficial protocol implementations, which were unreliable and lacked key features compared to native clients.

    To address these issues, a group of innovators in 1999 set out to design a better solution—an open instant messaging protocol that revolved around two fundamental principles:

    1. Federation : A federated protocol would allow users on any server to communicate seamlessly with users on other servers. This design was essential for scalability, as supporting billions of users on a single platform was unimaginable at the time.
    2. Gateway Support : The protocol would include gateways to existing networks, enabling users to connect with contacts on other platforms transparently, without needing to juggle multiple applications. The gateways were implemented on the server-side, allowing fast iterations on gateway code.

    This initiative, originally branded as Jabber , gave rise to XMPP (Extensible Messaging and Presence Protocol) , a protocol standardized by the IETF. XMPP gained traction, with support from several open-source servers and clients. Major players adopted the protocol—Google for Google Talk and Facebook for Facebook Messenger , enabling third-party XMPP clients to connect to their services. The future of open messaging looked promising.
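
    To make the federation idea concrete, here is a minimal sketch (Python, not actual server code) of the routing decision a federated XMPP server makes based on the domain part of a Jabber ID:

    ```python
    # Minimal sketch of federated routing: deliver locally if we host the domain,
    # otherwise dial out to the remote server (server-to-server), which is typically
    # located via a DNS SRV lookup for _xmpp-server._tcp.<domain>.

    LOCAL_DOMAINS = {"example.org"}

    def route(to_jid: str) -> str:
        bare = to_jid.split("/", 1)[0]    # strip the resource, if any
        domain = bare.split("@", 1)[-1]   # the part after '@' decides routing
        if domain in LOCAL_DOMAINS:
            return "deliver to a local session"
        return f"open an s2s connection to {domain}"

    print(route("alice@example.org/laptop"))  # deliver to a local session
    print(route("bob@chat.example.com"))      # open an s2s connection to chat.example.com
    ```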

    Fast Forward 20 Years

    Today, that optimism has faded. Few people know about XMPP or its newer counterpart, Matrix. Google’s messaging services have abandoned XMPP, Facebook has closed its XMPP gateways, and the landscape has returned to the fragmentation of the past.

    Instead of Yahoo! Messenger and MSN, we now deal with WhatsApp , Facebook Messenger , Telegram , Google Chat , Signal , and even messaging features within social networks like Instagram and LinkedIn. Our contacts are scattered across these platforms, forcing us to switch between apps just as we did in the 1990s.

    What Went Wrong?

    Many of these platforms initially adopted XMPP, including Google, Facebook, and even WhatsApp. However, their focus on growth led them to abandon federation. Requiring users to create platform-specific accounts became a key strategy for locking in users and driving their friends to join the same network. Federation, while technically advantageous, was seen as a barrier to user acquisition and growth.

    The Big Heist

    The smartphone era marked a turning point in messaging, fueled by always-on connectivity and the rise of app stores. Previously, deploying an app at scale required agreements with mobile carriers to preload the app on the phones they sold. Carriers acted as gatekeepers, tightly controlling app distribution. However, the introduction of app stores and data plans changed everything. These innovations empowered developers to bypass carriers and build their own networks on top of carrier infrastructure—a phenomenon known as over-the-top (OTT) applications .

    Among these new apps was WhatsApp , which revolutionized messaging in several ways. Initially, WhatsApp relied on Apple’s Push Notification Service to deliver messages in real time, bypassing the need for a complex infrastructure at launch. Its true breakthrough, however, was the decision to use phone numbers as user identifiers —a bold move that set a significant precedent. At the time, most messaging platforms avoided this approach because phone numbers were closely tied to SMS, and validating them via SMS codes came with significant costs.

    WhatsApp cleverly leveraged this existing, international system of telecommunication identifiers to bootstrap its proprietary network. By using phone numbers, it eliminated the need for users to create, manage and share separate accounts, simplifying onboarding. WhatsApp also capitalized on the high cost of SMS at the time. Since SMS allowances were rarely unlimited, and international SMS was especially expensive, many users found it cheaper to rely on data plans or Wi-Fi to message friends and family—particularly across borders.

    When we launched our own messaging app, TextOne (now discontinued), we considered using phone numbers as identifiers but ultimately decided against it. Forcing users to disclose such personal information felt intrusive and misaligned with privacy principles. By then, the phone had shifted from being a shared household device to a deeply personal one, making phone numbers uniquely tied to individual identities.

    Later, WhatsApp launched its own infrastructure based on ejabberd, but kept the service closed.

    Unfortunately, most major players seeking to scale their messaging platforms adopted the phone number as a universal identifier. WhatsApp’s early adoption of this strategy helped it rapidly amass a billion users, giving it a decisive first-mover advantage. However, it wasn’t the only player to recognize and exploit the power of phone numbers in building massive-scale networks. Today, the phone number is arguably the most accurate global identifier for individuals, serving as a cornerstone of the flourishing data economy.

    What’s Wrong With Using Phone Numbers as IDs?

    Phone numbers are a common good —a foundation of global communication. They rely on the principle of universal accessibility: you can reach anyone, anywhere in the world, regardless of their phone provider or location. This system was built on international cooperation, with a branch of the United Nations playing a key role in maintaining a provider-agnostic, interoperable platform. At its core is a globally unique phone numbering system, created through collaborative standards and protocols.

    However, over-the-top (OTT) companies have exploited this infrastructure to build private networks on top of the public system. They’ve leveraged the universal identification scheme of phone numbers—and, by extension, the global interoperable network—to construct proprietary, closed ecosystems.

    To me, this feels like a misuse of a common good. Phone numbers, produced through international cooperation, should not be appropriated freely by private corporations without accountability. While it may be too late to reverse this trend, we should consider a contribution system for companies that store and use phone numbers as identifiers.

    For example, companies that maintain databases with millions of unique phone numbers could be required to pay an annual fee for each phone number they store. This fee could be distributed to the countries associated with those numbers. Such a system would achieve two things:

    1. Encourage Accountability : Companies would need to evaluate whether collecting and storing phone numbers is truly essential for their business. If the data isn’t valuable enough to justify the cost, they might choose not to collect it.
    2. Promote Fairness : For companies that rely heavily on phone numbers to track, match, and build private, non-interoperable services, this fee would act as a fair contribution, akin to taxes paid for using public road infrastructure.
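
    As a purely hypothetical illustration of how such a contribution might be computed (the fee amount and the crude prefix-to-country mapping below are assumptions for the example, not figures from any proposal):

    ```python
    # Hypothetical: charge an annual fee per stored phone number, owed to the
    # country each number belongs to. A real implementation would need a proper
    # numbering-plan library (e.g. the phonenumbers package) rather than prefixes.
    from collections import Counter

    ANNUAL_FEE_PER_NUMBER_EUR = 0.05  # assumed figure for illustration only

    stored_numbers = ["+33612345678", "+4915112345678", "+33687654321"]

    def country_of(number: str) -> str:
        if number.startswith("+33"):
            return "FR"
        if number.startswith("+49"):
            return "DE"
        return "other"

    fees_by_country = Counter()
    for number in stored_numbers:
        fees_by_country[country_of(number)] += ANNUAL_FEE_PER_NUMBER_EUR

    print(dict(fees_by_country))  # e.g. {'FR': 0.1, 'DE': 0.05}
    ```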


    Beyond Taxes: The Push for Interoperability

    Of course, a contribution system alone won’t solve the larger issue. We also need a significant push toward interoperable and federated messaging . While the European Digital Markets Act (DMA) includes an interoperability requirement, it doesn’t go far enough. Interoperability alone cannot address the challenges of closed ecosystems.

    I’ll delve deeper into why interoperability must be paired with federation in a future article, as this is a critical piece of the puzzle.

    Interoperability vs. Velocity

    To conclude, I’d like to reference the introduction of the IETF SPIN draft , which perfectly encapsulates the trade-offs between interoperability and innovation:

    Voice, video and messaging today is commonplace on the Internet, enabled by two distinct classes of software. The first are those provided by telecommunications carriers that make heavy use of standards, such as the Session Initiation Protocol (SIP) [RFC3261]. In this approach - which we call the telco model - there is interoperability between different telcos, but the set of features and functionality is limited by the rate of definition and adoption of standards, often measured in years or decades. The second model - the app model - allows a single entity to offer an application, delivering both the server side software and its corresponding client-side software. The client-side software is delivered either as a web application, or as a mobile application through a mobile operating system app store. The app model has proven incredibly successful by any measure. It trades off interoperability for innovation and velocity.

    The downside of the loss of interoperability is that entry into the market place by new providers is difficult. Applications like WhatsApp, Facebook Messenger, and Facetime, have user bases numbering in the hundreds of millions to billions of users. Any new application cannot connect with these user bases, requiring the vendor of the new app to bootstrap its own network effects.

    This summary aligns closely with the ideas I’ve explored in this article.

    I believe we’ve reached a point where we need interoperability far more than continued innovation in voice, video, and messaging. While innovation in these areas has been remarkable, we have perhaps been too eager—or too blind—to sacrifice interoperability in the name of progress.

    Now, the pendulum is poised to swing back. Centralization must give way to federation if we are to maintain the universality that once defined global communication. Without federation, there can be no true global and universal service, and without universality, we risk regressing, fragmenting all our communication systems into isolated and proprietary silos.

    It’s time to prioritize interoperability, to reclaim the vision of a truly connected world where communication is open, accessible, and universal.


    • chevron_right

      ProcessOne: Fluux multiple Subscriptions/Services

      news.movim.eu / PlanetJabber • 15 January, 2025

    Fluux is our ejabberd Business Edition cloud service. With a subscription, we deploy, manage, update and scale an instance of our most scalable messaging server. Up to now, if you wanted to deploy several services, you had to create another account with a different email. Starting today, you can manage and pay for different servers from a single Fluux account.

    Here is how to use the feature. On the Fluux dashboard main page, after the list of your services/platforms, you will notice a "New" button.


    You will then be redirected to a page where you can choose your plan.


    Once you have accepted the terms and conditions, you can fill in your card information on a page hosted by our payment provider.


    Once the payment succeeds, you will be redirected to the Fluux console, with a link to create your service:


    On this last page, you can provide a technical name that will be used to provision your Fluux service.


    After 10 minutes, you can enjoy your new service at techname.m.in-app.io (for example, test1.m.in-app.io).
