In August 2022, the Office of the National Coordinator for Health Information Technology (ONC) launched the 2022 Public Health Data Systems Task Force as a subcommittee of the Health Information Technology Advisory Committee (HITAC). The task force will meet through early November to present recommendations that continue and build upon the work of the 2021 task force. Its members include individuals from various levels of government, relevant public health associations, and industry partners. Specifically, the task force is focused on the certification criteria for EHR products certified under the ONC Health IT Certification Program that cover transmission of data from EHRs to public health agencies in these domains...
Platform businesses scale differently from traditional businesses: platforms scale through network effects. In the previous post, we introduced a widely used metaphor: pipes vs. platforms. Traditional businesses are pipes. Their value chains are linear, with value added at sequential stages before a final product or service is delivered to consumers at the end of the pipeline. Platforms do not produce goods or services themselves; they make connections among stakeholders and facilitate value exchange among them, so value is created outside the platform. Both pipeline businesses and platform businesses strive for scale, but the type of scale they pursue is vastly different. In this post, we'll explain how pipeline businesses pursue economies of scale (on the supply side) and how platform businesses scale through network effects (on the demand side).
In the golden age of data analysis, open source communities are not exempt from the frenzy of getting big, fancy numbers onto presentation slides. That information brings even more value when you master the art of asking a well-framed question and executing the analysis properly. You might expect me, a data scientist, to tell you that data analysis and automation will inform your community decisions. It's actually the opposite: use data analysis to build on your existing knowledge of your open source community, incorporate the knowledge of others, and uncover potential biases and perspectives you haven't considered. You might be an expert at running community events, while your colleague is a whiz at all things code. As each of you develops visualizations within the context of your own knowledge, you both benefit from that information.
Where do people come together to make cutting-edge invention and innovation happen?....What of open source software? Certainly, major projects are highly collaborative. Open source software also supports the kind of knowledge diffusion that, throughout history, has enabled the spread of at least incremental advances in everything from viticulture to blast furnace design in 19th-century England. That said, open source software historically had a reputation for being merely good enough and cheaper than proprietary software. That has changed significantly, especially in areas like working with large volumes of data and the whole cloud-native ecosystem. This development probably reflects how collaboration has, in many cases, overcome the tendency toward incrementalism. And IP concerns are largely handled in open source software—occasional patent and license incompatibility issues notwithstanding.
The HLN Consulting team attended the HL7 36th Annual Plenary & Working Group Meeting (WGM), held in Baltimore, MD, September 17–23, 2022. More than 500 attendees, representing all aspects of the industry, took part in the in-person WGM after two years of virtual meetings. The seven-day event started on Saturday with a weekend Connectathon. The meeting offered attendees an opportunity to come together and collaborate, and it was especially valuable for those involved in healthcare standards development.
Open source is a flourishing and beneficial ecosystem that publicly solves problems in communities and industries using software developed through a decentralized model and community contributions. Over the years, this ecosystem has grown in both size and strength among hobbyists and professionals alike. It's mainstream now—even proprietary companies use open source to build software. With the ecosystem booming, many developers want to get in and build new open source projects. The question is: How do you do that successfully? This article demystifies the lifecycle and structure of open source projects. I want to give you an overview of what goes on inside an open source project and, based on my personal experience, show you how to build a successful and sustainable one.
Open source software (OSS), once a niche segment of the development landscape, is now ubiquitous. This growth is fantastic for the open source community. However, as the usage of OSS increases, so do concerns about security. Especially in mission-critical applications—think medical devices, automobiles, space flight, and nuclear facilities—securing open source technology is of the utmost priority. No single entity, whether a developer, an organization, or a government, can solve this problem alone. The best outcomes are possible when all of them come together to collaborate. The Open Source Security Foundation (OpenSSF) was formed to facilitate this collaboration.
When I was around 5 years old, my father brought home our first computer. From that moment on, I knew I wanted to pursue a career in computers, and I haven't stopped hanging around them since. During high school, while considering which specific area to focus on, I started experimenting with hacking, and that was the moment I decided to become a security engineer. I'm now a software engineer on the security compliance team. I've been at Red Hat for over two years, and I work remotely in the Czech Republic. Outside of my day job, I play blind football, and I'm involved in various projects connecting visually impaired and sighted people, including working with a small NGO that runs activities for blind and visually impaired people. I'm also working on an accessible Fedora project, currently called Fegora, an unofficial Linux distribution aimed at visually impaired users.
Open source is critical to data analysis, and it provides long-term benefits for users, community members, and businesses.
Amazing though it may seem, we each experience the world differently: that's one reality with over 6 billion interpretations. Many of us use computers to broaden our experience of the world, but a computer is part of reality, so if you experience reality without, for instance, vision or sound, then you also experience a computer without vision or sound (or whatever your unique experience might be). As humans, we don't have the power to experience the world the way somebody else does. We can mimic some of the surface-level things (I can close my eyes to mimic blindness, for example), but it's only an imitation, without history, context, or urgency. Because of this complexity, we humans design things primarily for ourselves, based on the way we experience the world. That can be frustrating from an engineering and design viewpoint: even when you intend to be inclusive, you end up forgetting something "obvious" and essential, or the solution to one problem introduces a problem for someone else, and so on. What's an open source enthusiast, or programmer, or architect, or teacher, or just an everyday hacker supposed to do to make software, communities, and processes accessible?