Steve Cote

Steve has been a systems architect for the telecommunications and electric utility industries, creating circuit provisioning systems and smart grid solutions. He is a hands-on architect who works with teams to deliver large, complex projects. More recently, Steve has been consulting with agile teams, from Fortune 15 organizations to start-ups and everything in between, helping them become more productive with DevOps and agile practices.

Friday, 16 August 2019 14:46

Run Tagged Tests in Maven

It is common for a project to contain a mix of tests. Some run fast, some run slowly, and others are more integration tests than unit tests. This causes problems in DevOps practices when you need to build a project after a simple change and don't want to wait 15 minutes for all the tests to run; that is not fast feedback. Here is how to call a Maven build so that only the tests you want are run.
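
For example, with JUnit 5 you can tag tests and then tell Maven (via the Surefire plugin) which groups to run; the class and tag names below are only placeholders:

  import org.junit.jupiter.api.Tag;
  import org.junit.jupiter.api.Test;

  class ContactServiceTest {

    @Test
    @Tag("fast")          // quick unit test, run on every build
    void validatesEmailFormat() {
      // assertions against pure in-memory logic
    }

    @Test
    @Tag("integration")   // slow test that needs a database or network
    void savesContactToDatabase() {
      // assertions against a live backing service
    }
  }

Run only the fast tests with mvn test -Dgroups=fast, or skip the slow ones with mvn test -DexcludedGroups=integration.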

Tuesday, 15 January 2019 16:17

Adding New Types to Sparx EA

If you are creating class or database models, you have probably found that the list of available data types is incomplete; "string", for example, may be missing. This is how to add data types to your model in Sparx Enterprise Architect.

Monday, 14 January 2019 00:11

Project X

A coworker handed this to me a few years back when we were conducting technical interviews for developers and asked if I could guess what the output would be.

Friday, 04 January 2019 15:13

Note-Taking for Consultants

Whenever you start a new project, note-taking skills matter. Each meeting can yield volumes of information, and recording that information is important, but being able to retrieve it later is even more important. This article covers a few thoughts on how to structure your notes for quicker information retrieval.

Wednesday, 28 November 2018 18:27

Blackbird

The Blackbird project is a group of Add-Ins for Sparx Enterprise Architect (EA) which allow the analyst/architect to integrate UML (Unified Modeling Language) into a variety of tasks.

Reliability

Blackbird RA is an add-in which allows an analyst to perform Failure Mode and Effect Analysis (FMEA) on system designs. It started as a Proof of Concept for the Enterprise Architecture department of a Fortune 15 pharmaceutical company to enable architects to spot reliability issues in system designs. The PoC code was then rewritten from scratch to provide a stable starting point for the project.

For more information on performing Failure Mode and Effect Analysis, see the article entitled Quantitative Reliability Assessment. It describes how to increase the reliability of complex software systems through the use of FMEA.

Model

An architect models a process to identify the design elements involved. This is a common step in any design effort, so existing models can be used. The architect then selects the components in those modeled processes and diagrams the Failure Modes for each design element, associating each Failure Mode with its element. Each Failure Mode is then given a set of Tagged Values recording its Severity, Probability, and chance of Detection. Once a scored Failure Mode is associated with a design element, the element is considered a profiled item.

Report

The architect can then ask the Reliability add-in to scan the model, find all the design elements which have Failure Modes associated with them, and calculate each element's Risk Priority Number from those Failure Modes.

The result is a documented design and/or process flow and a list of the related design elements, each with a Risk Priority Number (RPN).
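
To illustrate the arithmetic using the standard FMEA convention (the add-in's exact scales may differ), each Failure Mode is scored on Severity, Probability, and Detection, and the RPN is their product:

  // Illustrative only: classic FMEA scoring where each factor is rated 1-10
  // (for Detection, a higher number means the failure is harder to detect).
  public class RiskPriority {

    static int rpn(int severity, int probability, int detection) {
      return severity * probability * detection; // ranges from 1 to 1000
    }

    public static void main(String[] args) {
      // A hypothetical Failure Mode on a "Message Queue" element:
      // severe impact (8), occasional occurrence (4), hard to detect (6)
      System.out.println("RPN = " + rpn(8, 4, 6)); // prints RPN = 192
    }
  }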

Cloud Modeler

The Blackbird Cloud Modeler add-in allows an analyst to connect to different cloud platforms such as Azure, Google, Amazon, and others to collect various infrastructure data as standard UML models.

This project is still in the development stages.

Wednesday, 28 November 2018 17:41

DataFrame

DataFrame is a compact, efficient, hierarchical, self-describing and utilitarian data format with the ability to marshal other formats.

This toolkit was conceived in 2002 to implement the Data Transfer Object (DTO) design pattern in distributed applications, passing a DataFrame as both the argument and the return value in remote service calls. Using this implementation of a DTO allowed for more efficient transfer of data between distributed components, reducing latency, improving throughput, decoupling the components of the system, and moving business logic out of the data model.

A DataFrame can also be used as a Value Object design pattern implementation, providing access to data without carrying any business logic. Value Objects tend to make service interfaces simpler and can often reduce the number of calls to remote services, since the entire data model can be populated in a single exchange. Value Objects can also be used to reduce the number of parameters on service methods.
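
To make the pattern concrete, here is a generic sketch of the DTO/Value Object idea in Java (this is not the DataFrame API itself): the object carries data and nothing else, so one remote call can return the whole view at once.

  import java.io.Serializable;

  // A plain Data Transfer Object: fields and accessors only, no business logic.
  // Returning the whole customer view in one call avoids several fine-grained
  // remote calls for name, status, and so on.
  public class CustomerDTO implements Serializable {
    private final String id;
    private final String name;
    private final String status;

    public CustomerDTO(String id, String name, String status) {
      this.id = id;
      this.name = name;
      this.status = status;
    }

    public String getId() { return id; }
    public String getName() { return name; }
    public String getStatus() { return status; }
  }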

DataFrame uses a binary wire format which is more efficient to parse and transfer than text-based formats. Marshaling occurs only when the data is accessed, so unnecessary conversions are avoided.

Prerequisites:

  • JDK 1.8 or later installed
  • Ability to run bash scripts
  • Assumes you do not have Gradle installed (if you do, you can replace gradlew with gradle)
Wednesday, 28 November 2018 17:35

Coyote Loader

The Coyote Loader is a toolkit and a framework for loading components in a JRE and managing their life cycle. It supports multi-threading, but in a very simple manner to keep maintainability as high as possible.

This is an evolving project which provides a starting point for the loaders of a variety of other projects, and it therefore has rather unusual adaptability requirements. For example, this loader framework must operate on embedded devices, and there are currently several embedded Java projects underway to which this project will contribute.

Why

Breaking the Loader out into its own project makes it easier to focus on the design and testing of component loading without the distraction of the system as a whole. The hope is that a very flexible component loader can be developed and applied to several projects currently in development. This framework will be developed and tested separately, then merged into other projects when they are ready to implement a command-line loader.

Other container projects are far too complex for our needs, as they try to be everything for everyone. This is a purpose-built toolkit for a specific set of needs.

12 Factor Applications

The loader solves several problems for our scalable 12-factor applications. Everything is self-contained in our applications, and reliance on an external container is eliminated. This means the loader can be used to stand up a complete running instance without external containers or other frameworks. Our Heroku slug sizes are significantly smaller than those with Jetty, Spring, Tomcat or other frameworks included. Because this was built to support deployment on single-board computers (SBC) and embedded systems (e.g. field sensors), our cloud deployment footprints benefited as well.

The encryption is completely pluggable, allowing any library to be used through a simple interface. The encryption algorithm and keys can be specified in environment variables, another tenet of 12-factor applications.

Environment variables are leveraged in the configuration and templating tools, further reducing reliance on file systems and helping the developer ensure each environment (development, test, quality assurance, certification, production, etc.) is configured correctly and that project artifacts such as configuration files do not "point" to the wrong locations or backing services. Loggers tie into backing log streams, allowing further independence from the ephemeral file systems used in many cloud infrastructures. While the local file system is used by default, it is easy to send log events externally by simply using a different appender, configured (of course) through environment variables.
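
The underlying lookup is just ordinary JRE environment access; a minimal sketch, where the variable name and default value are placeholders rather than the Loader's actual keys:

  public class EnvConfig {

    // Return the environment value if set, otherwise the configured default.
    static String get(String name, String configuredDefault) {
      String value = System.getenv(name);
      return (value == null || value.isEmpty()) ? configuredDefault : value;
    }

    public static void main(String[] args) {
      // "LOG_TARGET" and the file path are examples only.
      String logTarget = get("LOG_TARGET", "file://./logs/app.log");
      System.out.println("Logging to " + logTarget);
    }
  }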

The Loader uses a set of JRE shutdown hooks to help ensure graceful shutdown when SIGTERM is caught. The Loader then calls the shutdown and terminate methods on all the components, giving them a chance to terminate gracefully. This helps the application handle life in the cloud, where instances are terminated and moved to support scaling operations.
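
The general JRE mechanism looks roughly like this; a sketch of the technique, not the Loader's actual code:

  public class GracefulComponent {

    public static void main(String[] args) throws InterruptedException {
      // The JVM runs this hook when SIGTERM is received or the process exits.
      Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        // This is where a loader would call each component's shutdown and
        // terminate methods so in-flight work can finish cleanly.
        System.out.println("Shutting down components...");
      }));

      System.out.println("Running; send SIGTERM (or Ctrl+C) to stop.");
      Thread.currentThread().join(); // keep the process alive until terminated
    }
  }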

Coyote Loader allows us to create 12-factor applications which maximize automation, offer maximum portability, can be deployed on modern cloud platforms, allow for continuous deployment, and scale quickly.

Project Goals

This is a prototyping project which will be used to drive a loader for a set of IoT (Internet of Things) projects. It must therefore support both traditional platforms (e.g. server installations) and the restricted resources of embedded systems, and it must not rely on classes or libraries which may not be available in JRE images with limited libraries.

  • Small Footprint - Forego larger, general purpose libraries for simple, purpose-driven code. Resources spent on storing unused code are resources taken away from application data.
  • Portability - Usable on as many publicly available embedded systems platforms as possible. If it runs Java, it should be able to run this loader.
  • Simplicity over Elegance - Maintainability of the code is key to stable systems; this project uses simple concepts and plainly written code (and comments) so bugs are easier to spot and fix.
  • Security Built-In, not Bolted-On - Working in the utility industry has made it clear that security should be first on your requirements list and development plan.
  • Micro Services - No need for containers and complex frameworks to expose your components through secure ReST APIs.
  • 12-Factor Ready - Tools support all tenets of 12-factor applications while enabling you to use more traditional practices.
  • Stay out of the developer's way; no surprises.

What this project is not:

  • The best way to do X - Everyone's needs will be different and this is just what has been found to solve many common problems in this area. YMMV
  • Containerization - This is a JRE toolkit, not a self-contained environment to run traditional applications.
  • Application Server - While it serves the same role as many App Servers, this is not intended to be one of the full-blown environments you find on the market today.
  • Intended To Lock You In - This is a way to run your components your way, not ours. This project strives to let you load wrappers for POJOs and not specialized components (e.g. Servlets, EJBs).

History

This code was created sometime during the Java 1.4 era to support projects which needed to modularize their components and allow each of those components to be developed independently. The idea was to simply place the JAR of a component in the class path and let the loader load the component as directed by the configuration file.

The loader has been used in many different projects in the telecommunications and electric utility industries, allowing lifecycle control of components for applications which controlled and managed the provisioning of voice and data infrastructure and the monitoring and management of smart grid resources. It has evolved through many different incarnations into its latest form.

Capabilities

  • Configuration File Driven - No coding, just specify a file to direct the operation.
  • Component Life Cycle Management - Creation, monitoring and cleanup of components.
  • HTTP Server - Lightweight and secure message exchange for component communications.
  • Environment Variables - Environment variables override configuration for easier porting between environments.

Prerequisites:

  • JDK 1.8 or later installed
  • Ability to run bash (*nix) or batch (Windows) scripts
  • Network connection to get the dependencies (there are ways around that)
  • Assumes you do not have Gradle installed (if you do, you can replace gradlew with gradle)
Wednesday, 28 November 2018 17:17

Coyote DX

Coyote Data Exchanger is a toolkit to read data from one entity and write data to another while performing some level of transformation in between. It is more than an ETL tool and is designed to be run in any size environment. If it can run Java SE, it can run Coyote DX.

The goal has evolved into creating a data exchange tool along the lines of build tools like Maven, Gradle and Ant, where a configuration file (pom.xml, build.gradle and build.xml respectively) is written and then run (with mvn, gradle and ant respectively). The result is data read from one system and written to another.

Using Coyote DX, it is possible to craft an "exchange" file (e.g. newcontacts.json) and call Coyote DX to run that exchange (e.g. cdx newcontacts.json). No coding is necessary; all the components are either contained in Coyote DX or in a library dropped into its path.

The primary use case involves running this "exchange" as required, most likely from cron, a scheduler, or a dedicated service running exchanges on the host; an exchange might be run every 15 minutes, for example. This models the well-known batch integration pattern.

A related use case involves the exchange running continually as a background process, executing when some event occurs. The exchange blocks until the event is detected, at which time it processes the event. For example, an exchange job may wait until a new record has been added to the source system; when that record is detected, it is read in and passed through to the destination system(s). In this manner, it operates less like a batch exchange and more like a real-time exchange, handling time-sensitive data as it becomes available.

So far, it has been useful in integrating applications, connecting field devices to the cloud, performing load, performance, and integration testing, modeling new data exchanges, keeping systems in sync during migrations, testing service APIs, and connecting hardware prototypes to test systems. This tool is helpful in more cases than integration; it can be used to exchange data between any network-connected system actors. One application involved monitoring systems, regularly polling system metrics and writing events when metrics exceeded thresholds.

New components are being added regularly and its design supports third-party contributions without recompilation. Just add your library to the path and your "exchange" file can reference the new components in that library.

Documentation

There is a project wiki which is updated with the latest information about the toolkit, how to use it, examples and best practices. It is the primary source for information on the toolkit.

Development Status

This library is currently past the prototyping stage of initial development and well into testing. Documentation is being generated, and the code is in use by integration teams, uncovering issues for resolution and new use cases for toolkit expansion.

No broken builds are checked-in, so what is in this repository should build and provide value. This project is indirectly supporting integration efforts and is being tested using real-world scenarios. As new use cases are discovered the toolkit is updated to support them.

Feel free to copy whatever you find useful and please consider contributing so others may benefit as you may have.

Project Goals

This project has a simple goal: make executing data exchange jobs quick and simple.

  • Configuration file-based, for easy operation of many different tasks (i.e. an "exchange" file),
  • Support command-line operations with no coding (just an "exchange" file),
  • Do not require complicated frameworks or facilities (e.g. containers),
  • Enable integration prototyping and development operations,
  • Provide utilities to assist in the reading, writing, and transformation of data,
  • Simple Configuration; if we need a GUI to configure the tools, we are not simple enough.
Friday, 30 November 2018 12:01

Integrations As A Service

Run a pool of Coyote DX workers in your data center (or cloud) to scale integration horizontally.

Friday, 30 November 2018 11:59

ITSM Integration

ServiceNow makes it easy to get data in, but getting your data out is more limited. Use Coyote DX to easily exchange data between ServiceNow and any other system.
