Tech Radar

Stop

Android Things lets you build professional, mass-market products on a trusted platform, without previous knowledge of embedded system design. It reduces the large, upfront development costs and the risks inherent in getting your idea off the ground. When you’re ready to ship large quantities of devices, your costs also scale linearly and ongoing engineering and testing costs are minimized with Google-provided updates.

Android Things extends the core Android framework with additional APIs provided by the Things Support Library, which lets you integrate with new types of hardware not found on mobile devices. Developing apps for embedded devices differs from mobile development in a few important ways:

  • More flexible access to hardware peripherals and drivers than on mobile devices.
  • System apps are not present, which optimizes startup time and storage requirements. Apps are launched automatically on startup to immerse your users in the app experience.
  • Devices expose only one app to users, instead of multiple apps as on mobile devices.

SDK

Why? Android Things became obsolete when Google retargeted it to a small set of devices. It also remains closed source, which has further reduced its traction.

Why should you avoid using an event bus?

One common pattern in Android development is the use of event buses. Libraries like Otto and EventBus are often used to remove boilerplate listeners that require tunneling code through many layers of hierarchy. Although an event bus might seem convenient at first, it quickly devolves into a mess of tangled events that are incredibly hard to follow and even more difficult to debug.

Treating Producers Like Synchronous Getters

This is another common pattern that is incredibly hard to undo in a larger codebase. Oftentimes, activities or fragments will assume events are produced as soon as the component subscribes to the event bus. They will set a field based on the event, and then proceed to use that field in other lifecycle methods on the assumption that it is not null.

Alternatives

No library or tool will fix these problems for free, but some tools and patterns will encourage you to do things the right way.

An event bus, when used correctly, can avoid these issues. However, it also encourages these bad practices more than most tools do. Simple listeners, although they require more code, make things much easier to understand.
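As a sketch of the listener alternative in plain Java (class and listener names here are hypothetical): the producer notifies only explicitly registered listeners, so the event flow is easy to trace and debug, unlike a global bus where any component can publish or subscribe.

```java
import java.util.ArrayList;
import java.util.List;

public class ListenerDemo {
    // A plain listener interface: the dependency between producer and
    // consumer is explicit in the code, not hidden behind a global bus.
    interface LoginListener {
        void onLogin(String user);
    }

    static class LoginManager {
        private final List<LoginListener> listeners = new ArrayList<>();

        void addListener(LoginListener l) { listeners.add(l); }
        void removeListener(LoginListener l) { listeners.remove(l); }

        void login(String user) {
            // Only explicitly registered listeners are notified.
            for (LoginListener l : listeners) {
                l.onLogin(user);
            }
        }
    }

    public static void main(String[] args) {
        LoginManager manager = new LoginManager();
        List<String> log = new ArrayList<>();
        manager.addListener(user -> log.add("greeting: " + user));
        manager.login("alice");
        System.out.println(log); // [greeting: alice]
    }
}
```

The extra registration code is the price for being able to follow, in the IDE, exactly who produces and who consumes each event.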

For more complex scenarios, RxJava provides a great solution.

Google Support API

Background: Almost any Android application relies on third-party services such as geolocation, database and storage, network management, UI tools, and widgets.

In addition, as the Android framework evolves with every new API level, it is still required to support old devices that run older APIs. For this, the Android framework offers the Google Support API libraries, which provide services, UI tools, and backward-compatibility support.

Support Library Uses

  1. Backward compatibility for newer APIs. For example, Fragments were introduced in API 11 (Android 3.0); if you want to use Fragments on older API levels, you can use the fragment support library.

  2. Helper classes. The Support Library provides helper classes and tools such as Fragments, the RecyclerView, and the ConstraintLayout.

  3. Testing and debugging tools. Testing and debugging are fundamental principles in software development, and in Android as well. The Support Library offers a number of classes beyond our code, including the support-annotations library and the Testing Support Library, that help us test and debug our app.

RxJava – Reactive Extensions for the JVM

A library for composing asynchronous and event-based programs using observable sequences for the Java VM.

Why stop using it?

  • RxJava 1.X does not handle back-pressure
  • RxJava 1.X will be deprecated soon and developers will have to change to RxJava 2.X at some point… rather do it now :)
  • RxJava 2.X has been completely rewritten from scratch on top of the Reactive-Streams specification. The specification itself has evolved out of RxJava 1.x and provides a common baseline for reactive systems and libraries.

This is the time to stop using RxJava 1 in new projects and think about migrating to RxJava 2.

Keep

The average person spends 1 hour per day commuting, and checks their smartphone on average 125 times a day. Cars get us where we are going, while phones keep us connected. Checking your smartphone while driving is distracting, causing 25% of all accidents. Android Auto lets us stay connected while driving and minimizes distractions. Developing for the Android Auto platform is easy and familiar to Android developers.

It is the same platform you already use for phones, tablets, watches, and more. All these experiences will often be in the same APK. Now it can also extend to the car, in a way that is safer and more efficient for the driver, so they can stay connected with their hands on the wheel and eyes on the road.

Using Android Auto enabled apps is easy: users download the app to their phone, connect the phone to the car, the phone goes into car mode, and casts the Auto experience to the Android Auto screen. This means that even though the application is running on the phone, it is displayed on the car's dash, and the driver can interact with it via car controls, voice, and the touch screen.

In addition to that, if a specific Android Auto device and car support the link, an API is available for the CAN bus, the protocol that car diagnostic tools use to diagnose the car, read data from car sensors, and optimize its performance. It is available to Android Auto OEMs and car manufacturers via a C/C++ API, due to the extremely sensitive nature of the API.

Why? In recent years Android Auto has gained much attention in the car aftermarket, allowing third-party vendors and app makers to enrich the infotainment experience. With the introduction of always-on 5G, this trend will continue and require more developers to adapt their apps to the Android Auto world.

A set of Kotlin extensions for Android app development. The goal of Android KTX is to make Android development with Kotlin more concise, pleasant, and idiomatic by leveraging the features of the language such as extension functions/properties, lambdas, named parameters, and parameter defaults. It is an explicit goal of this project to not add any new features to the existing Android APIs.

Android KTX saves a lot of boilerplate code, making development speed and productivity increase.

Android KTX is in Preview - Bugs and Stability issues may occur

Android P Features and API Overview

It offers the following:

  • New Wi-Fi RTT and location tracking. The app must have the Location permission and Wi-Fi scanning turned on.
  • Smart notification improvements, such as inline smart reply from the notification bar, adding images to the reply, and saving replies as drafts.
  • Multi-camera support: access two camera devices at the same time.

Artifacts within the androidx namespace comprise the Android Jetpack libraries. Like the Support Library, libraries in the androidx namespace ship separately from the Android platform and provide backward compatibility across Android releases.

AndroidX is a major improvement to the original Android Support Library, which is no longer maintained. androidx packages fully replace the Support Library by providing feature parity and new libraries.

In addition, AndroidX includes the following features:

  • All packages in AndroidX live in a consistent namespace starting with the string androidx. The Support Library packages have been mapped into the corresponding androidx.* packages. For a full mapping of all the old classes and build artifacts to the new ones, see the Package Refactoring page.

  • Unlike the Support Library, androidx packages are separately maintained and updated. The androidx packages use strict Semantic Versioning, starting with version 1.0.0. You can update AndroidX libraries in your project independently.

  • Version 28.0.0 is the last release of the Support Library. There will be no more android.support library releases. All new feature development will be in the androidx namespace.

We think it is important to keep projects up to date with AndroidX in order to keep project dependencies resolvable, since the Support Library is no longer maintained.
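For an existing project, the migration is commonly switched on with two flags in gradle.properties (a sketch; Jetifier rewrites third-party binaries that still reference the old packages):

```properties
# gradle.properties
android.useAndroidX=true      # use androidx.* artifacts instead of android.support.*
android.enableJetifier=true   # rewrite third-party dependencies to androidx at build time
```

Dependencies then move to their mapped coordinates, e.g. com.android.support:appcompat-v7 becomes androidx.appcompat:appcompat, per the Package Refactoring mapping mentioned above.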

ExoPlayer is an application-level media player for Android. It provides an alternative to Android’s MediaPlayer API for playing audio and video both locally and over the Internet. ExoPlayer supports features not currently supported by Android’s MediaPlayer API, including DASH and SmoothStreaming adaptive playback. Unlike the MediaPlayer API, ExoPlayer is easy to customize and extend and can be updated through Play Store application updates.

Why?

  • Relying on built-in multimedia players is risky because of incompatibilities and lack of features.

  • The technology is supported by Google themselves, e.g. YouTube app.

Firebase Platform

Firebase is a powerful platform for Android, iOS, and web apps.

Firebase gives you functionality like analytics, databases, messaging and crash reporting so you can move quickly and focus on your users. Firebase is built on Google infrastructure and scales automatically.

Firebase provides many services, such as Analytics, Auth, the Realtime Database, Cloud Messaging, Storage, Crash Reporting, and more.

A single Firebase console is provided to control the services and data. It is easy to use and set up with the Firebase SDK libraries, and for most of the services there is a wide range of documentation and code examples.

What is new in Firebase: Cloud Firestore, Cloud Functions, and Crashlytics.

Gradle is a build tool. Following a build-by-convention approach, Gradle allows for declaratively modeling your problem domain using a powerful and expressive domain-specific language (DSL) implemented in Groovy instead of XML. Because Gradle is JVM-native, it allows you to write custom logic in the language you are most comfortable with, be it Java or Groovy. It also provides powerful dependency management.

Although many more developers still use Maven (~60%), Gradle is a powerful contender that needs to be considered when deciding which build tool to use.

The Android Studio build system is based on Gradle, and the Android Gradle plugin adds several features that are specific to building Android apps.
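A minimal sketch of what the DSL looks like for an Android app module (the application ID and versions below are illustrative):

```groovy
// build.gradle (app module)
apply plugin: 'com.android.application'

android {
    compileSdkVersion 28
    defaultConfig {
        applicationId "com.example.app"   // hypothetical application id
        minSdkVersion 21
        targetSdkVersion 28
    }
}

dependencies {
    // dependency management is declared in the same DSL
    implementation 'androidx.appcompat:appcompat:1.0.0'
}
```

The same file can host custom tasks written in Groovy or Java, which is the flexibility advantage over Maven's XML.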

Robust, age-proven, and renewing: the Java language, although it has been with us for a long time, keeps renewing itself. Starting with Java 9, we now see a new version of Java every six months. This means more features, much faster. I still believe that using Java gives a very strict and obvious structure to the program, which is sometimes lacking in other programming languages.

Framework support: the rise of reactive frameworks (Vert.x, Spring Reactor, and others) and micro frameworks (Spark Java, JavaLite, light-4j) makes Java very versatile in its uses and use cases. With the above-mentioned frameworks, we can now easily program reactive systems and microservices with a minimal footprint. Java always follows the trends (although sometimes a bit behind), as we can see with the JVM's recent Docker support.

The Java community is huge, one that cannot be neglected.

We recommend using this language as it has one of the largest ecosystems of any language and a vibrant community, and it is actively maintained. It is very versatile, easy to use and learn, and fits many applications.

Kotlin is a statically typed programming language that runs on the JVM (Java Virtual Machine). Development lead Andrey Breslav has said that Kotlin is designed to be an industrial-strength object-oriented language, and a “better language” than Java, but still fully interoperable with Java code, allowing companies to make a gradual migration from Java to Kotlin.

At Google I/O 2017, Google announced first-class support for Kotlin on Android. As of Android Studio 3.0 (October 2017), Kotlin is a fully supported programming language by Google on the Android Operating System and is directly included in the Android Studio 3.0 IDE package as an alternative to the standard Java compiler.

Kotlin is now an official language on Android. It’s expressive, concise, and powerful. Best of all, it’s interoperable with our existing Android languages and runtime.

Besides the stylistic advantages over Java, Kotlin has new and exciting features that offer performance and stability improvements over Java code, such as Kotlin coroutines, Rx-like collection iteration modifiers, null safety, operator overloading, etc.

Flutter is Google’s cross-platform framework, while React Native is a cross-platform framework developed by Facebook.

Although React Native has existed since 2015, Flutter, announced in May 2017 and officially released in December 2018, has already reached high popularity and is starting to be a true alternative to React Native.

Flutter is an open-source project, developed and supported by Google and based on the Dart programming language. Dart compiles directly to platform-native code and is a modern, object-oriented, reactive, event-driven language, which makes it great for cross-platform programming.

React Native is based on the JavaScript programming language.

It seems that Google designed Flutter to be the best mobile app development framework, while pushing Dart to be a Flutter-friendly programming language. The community is quickly adopting Flutter as an alternative cross-platform framework, and there are already thousands of apps in production for both Android and iOS. On top of that, Flutter for Web was recently announced.

The question of which framework to use becomes more and more interesting, as there are still pros and cons to each framework.

React Native - building native apps with React.

React Native is the first and most popular paradigm for building native apps with JS. These are not weak PhoneGap-like web apps but real native applications, with real native-like performance.

RN does not depend on any vendor to support new native features; it stays in sync with the native OS components, which is partly what makes it so strong.

It is also relatively easy to pick up, especially if you have worked with React on the web before.

It has a vivid community with many users and contributors.

This technology really bridges frontend and mobile in the best way I know of to date.

Lastly, RN is used in React VR/AR.

Reactive Programming is a paradigm that changes the direction of the flow. Instead of creating threads and blocking for communication, we create pipelines where data is passed through and each layer can react to the data it receives. The indirection changes the code flow from blocking to async.

Over time the reactive extensions have become a standard for adding higher-level functionality on a stream of data while being agnostic to the data itself. The extensions are based on the observer pattern and allow for cancellation of data processing in the stream.
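As a sketch, the JDK's own Reactive Streams interfaces (java.util.concurrent.Flow, JDK 9+) show the pipeline idea: the subscriber reacts to pushed data, and the demand signal via request() is also the hook for back-pressure and cancellation. Class and value names below are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class ReactivePipeline {

    // A subscriber that reacts to pushed items and signals demand one
    // item at a time; Subscription.cancel() would stop the stream early.
    static class CollectingSubscriber implements Flow.Subscriber<Integer> {
        final List<Integer> received = new ArrayList<>();
        final CountDownLatch done = new CountDownLatch(1);
        private Flow.Subscription subscription;

        @Override public void onSubscribe(Flow.Subscription s) {
            subscription = s;
            s.request(1); // ask for the first item
        }
        @Override public void onNext(Integer item) {
            received.add(item);
            subscription.request(1); // react, then ask for the next one
        }
        @Override public void onError(Throwable t) { done.countDown(); }
        @Override public void onComplete() { done.countDown(); }
    }

    // Pushes a few values through the pipeline and waits for completion.
    public static List<Integer> run() {
        CollectingSubscriber sub = new CollectingSubscriber();
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(sub);
            for (int i = 1; i <= 3; i++) {
                publisher.submit(i * 10); // data flows down the pipeline
            }
        } // close() signals onComplete once pending items are delivered
        try {
            sub.done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sub.received;
    }

    public static void main(String[] args) {
        System.out.println(run()); // [10, 20, 30]
    }
}
```

Libraries like RxJava build map/filter/merge operators on top of exactly this publisher/subscriber contract.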

Why? Reactive programming has been around for a while but has only recently started to hit the mainstream. The most popular frameworks, like Vert.x and Spring, have support for reactive programming. For those looking for performance, reactive programming is a must.

RxJava and RxKotlin (Reactive Extensions) are among the most popular libraries that bring the advantages of asynchronous and event-based programming to Android development.

RxJava is written on top of the Reactive Streams specification and utilizes three fundamental building blocks:

  • The Observer Pattern
  • The Iterator Pattern
  • Functional Programming

In addition to the above, RxJava provides Scheduler objects, which manage multithreading tasks and allow easy data sharing across threads without the need for other Android framework tools such as Handlers or AsyncTasks.

RxJava 2, however, is an enhanced version of the first RxJava, written from scratch, that introduces back-pressure processing.

With RxJava 2 our code becomes much more scalable, maintainable, event-driven, error-resilient, and readable.

The advantages of event-driven, Reactive Streams code have been proven already. Therefore, we should strive to use RxJava 2 as one of the fundamental parts of our programs.

See the full RxJava 2 Javadoc and the RxJava 2 homepage.


Machine learning on mobile devices offers an array of new and unique user experiences for problems close to the data people are working on. Experiences that were impossible before, like OCR, image recognition, machine translation, and speech recognition, used to be in the realm of science fiction, but they are being brought into our daily lives. Thanks to machine learning on mobile and IoT devices, the universal translator from Star Trek, the Babel fish from The Hitchhiker’s Guide to the Galaxy, the “neural net processor, a learning computer” from Terminator 2, and more are not figments of writers’ imagination, but actual products that you can buy or that are in development.

TensorFlow is an open-source library for dataflow programming, often used in deep learning. It was developed for internal use by the Google Brain team and later released to the public, and in 2017 a mobile/lite version was released targeting mobile and embedded/IoT platforms such as Android, iOS, Raspberry Pi, etc.

Why should we keep using it? Prediction is already incorporated into most mobile devices. Having a lite version of TensorFlow gives us another important tool for such use cases.

Start

AI-driven development focuses on tools and techniques for embedding AI into applications and using AI to generate other AI-powered tools. This trend is evolving in two aspects:

  1. Enterprises prefer tools that target professional developers instead of data scientists. While data scientists have to build the AI infrastructure, framework, and platform, professional developers just have to infuse AI-powered capabilities into an application without the help of a data scientist.
  2. Ready-to-use AI tools are used to create AI-powered solutions which will enable companies to increase their productivity faster, reduce costs, and improve relationships with customers. This approach is empowering businesses by automating tasks related to the development of AI-powered solutions.

Technologies that are assisting developers in speeding up the development process are:

  • Augmented analytics
  • Automation testing
  • Automated code generation
  • Automated solution development

Why? Developers who use predefined models delivered as a service will be in high demand in the market. Such services enable more developers to utilize AI, hence increasing their efficiency. An example of such a service is Tabnine. Software houses will require developers to get familiar with AI-empowered services and techniques in order to minimize time-to-market and development costs.

Android 10 improves on the well-established Android experience with new innovations:

  • Networking: capabilities for 5G networks and Wi-Fi performance modes.
  • Handling configuration changes on foldable devices.
  • Smart Reply in notifications, and sharing shortcuts.
  • Dark Theme to save battery.
  • Gesture navigation replacing hardware buttons.
  • Improvements to privacy and security.
  • Audio: improved playback capture, new audio and video codecs, a native MIDI API, and directional, zoomable microphones.
  • Graphics: dynamic depth for photos and Vulkan.
  • Settings panels, Neural Networks API 1.2, and a Thermal API.

Why? These technologies have already been released on real devices.

Google Wear OS, introduced in March 2014 and previously known as Android Wear, is Google’s Android version for smartwatches and other wearables.

With Wear OS and the embedded Google Assistant, the James Bond world has never been so realistic. Check the weather or search for a restaurant by speaking to your watch. Get updates on your next meetings and directions, stay connected, and even pay using your watch. The future is here! And hey, Wear OS can easily connect with both Android and iPhone.

Wear OS brings a new vision to people who love to be on the edge of what mobile technology has to offer. Whether you are working out, listening to your music, or on a business trip, Google Wear OS is the little thing that will take your activity to the next level. And it works with both Android and iPhone.

Check out the Google Wear Homepage

Chaos Engineering is becoming a discipline in designing distributed systems in order to address the uncertainty of distributed systems at scale.

Chaos Engineering can be thought of as the facilitation of experiments to uncover systemic weaknesses.

These experiments follow four steps:

  1. Start by defining ‘steady state’ as some measurable output of a system that indicates normal behavior.
  2. Hypothesize that this steady-state will continue in both the control group and the experimental group.
  3. Introduce variables that reflect real-world events like servers that crash, hard drives that malfunction, network connections that fail, etc.
  4. Try to disprove the hypothesis by looking for a difference in steady-state between the control group and the experimental group.

In essence: the harder it is to disrupt the steady state, the more confidence we have in the behavior of the system. And if a weakness is uncovered, we now have a target for improvement before that behavior manifests in the system at large.
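The four steps can be sketched as a toy experiment in plain Java; the "service" suppliers and the fault rate below are stand-ins for real infrastructure:

```java
import java.util.Random;
import java.util.function.Supplier;

public class ChaosExperiment {

    // Step 1: define the steady state as a measurable output --
    // here, the fraction of successful calls over a number of requests.
    static double successRate(Supplier<Boolean> service, int requests) {
        int ok = 0;
        for (int i = 0; i < requests; i++) {
            if (service.get()) ok++;
        }
        return ok / (double) requests;
    }

    // Step 3: introduce a variable reflecting a real-world event --
    // a service where a given fraction of calls fail (crashed servers,
    // dropped network connections, etc.).
    static Supplier<Boolean> withFaults(double faultRate, long seed) {
        Random rnd = new Random(seed);
        return () -> rnd.nextDouble() >= faultRate;
    }

    public static void main(String[] args) {
        Supplier<Boolean> control = () -> true;              // control group
        Supplier<Boolean> experiment = withFaults(0.2, 42L); // experimental group

        double controlRate = successRate(control, 1000);
        double experimentRate = successRate(experiment, 1000);

        // Steps 2 + 4: hypothesize the steady state holds in both groups,
        // then try to disprove it by comparing the two measurements.
        boolean hypothesisHolds = Math.abs(controlRate - experimentRate) < 0.05;
        System.out.println("hypothesis holds: " + hypothesisHolds);
    }
}
```

With a 20% injected fault rate the hypothesis is disproved, which in a real system would point at the next resilience target (retries, failover, circuit breaking).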

Chaos Engineering got its name largely through Netflix’s Chaos Monkey.

Read More @ https://principlesofchaos.org/

Dart is Google’s programming language, optimized for the client side. Many rich frameworks, such as Flutter, use Dart as their development language.

Approach: Dart is a lightweight, object-oriented language that utilises modern programming paradigms such as reactive functional programming. Dart’s syntax is very familiar to C#, C++, Java, and Kotlin developers. Furthermore, Dart code is highly expressive.

Why? Dart compiles to ARM and x86 code, so apps developed in Dart can run natively on Android and iOS.

Flutter

Flutter is Google’s new framework for creating high-quality apps for both Android and iOS from a single codebase.

The framework is based on the Dart programming language, which eventually generates Android APK and iOS IPA packages.

Flutter was first introduced in May 2017 and is currently still in beta, but it already has a large community around the world.

Compared to other cross-platform frameworks such as React Native, which usually rely on the platform’s native UI widgets, Flutter renders the UI widgets internally and draws them directly on the screen canvas. This makes Flutter much faster and more reliable when it comes to backward compatibility.

As said above, Flutter already has a large community, highly rich documentation, and Google’s support.

For more details, check out Flutter’s home page.

State management is one of the most important topics in Flutter. In Flutter, there are already a few state management techniques that help us to develop our app. The most popular State Managements patterns are:

  • setState
  • BLoC - Business Logic Component
  • StreamBuilder
  • Redux

Each pattern has its pros and cons.

As a Flutter app is based on widgets structured in the app’s widget tree, we always have to consider where each widget is located in the tree (the context) and whether it should be stateful or stateless (the building blocks of Flutter). In Flutter, UI = f(State); when we manage the app state, we are actually managing the UI. Therefore, before starting to develop an app, we should select the right state management pattern, and we should use state management in Flutter apps.

Android 10 (API level 29) adds more support for foldable devices and different folding patterns.

Being able to run multiple windows is one of the benefits of large screens. In the past, having two apps side by side was common in some devices. The technology has improved to the point where three or more apps can run on the screen at the same time, and also share content between themselves.

In the future, you might see foldable phones that support more than one screen or display at a time. Handling this configuration is similar to how developers work with projected screens today on Chrome OS.

Why? We need to start supporting foldables because there are real devices with these form factors.

Google Virtual Reality: use the Google VR SDK to build apps for Daydream and Cardboard. The SDK provides native APIs for key VR features like user input, controller support, and rendering, which you can use to build new VR experiences.

Why

  • VR enhances the user experience, but usually requires peripherals for immersive interactions.

  • The technology is supported by Google themselves.

  • Many games and multimedia apps use VR to render their 3D environments.

Coroutines are a Kotlin feature that converts async callbacks for long-running tasks, such as database or network access, into sequential code. The Kotlin team defines coroutines as “lightweight threads”: they are a sort of task that actual threads can execute.

Why? If you use Kotlin, you need to understand this threading model. It is simpler in terms of design and more efficient in terms of CPU and memory usage, and it is the standard threading model used in the language.

Android, iOS/macOS, or web developers!

Have you ever thought animation can be so easy? Do you want to boost your web page or mobile app with cool animation which takes your product to the highest standards of user experience? Now, with Lottie by Airbnb, animation has never been so easy and cool!

Lottie is a library developed by Airbnb for Android, iOS/macOS, Web, and even React Native apps that parses Adobe After Effects animations exported as JSON with Bodymovin and renders them natively on mobile and the web. For the first time, designers can create and ship beautiful animations without an engineer painstakingly recreating them by hand. They say a picture is worth 1,000 words, so here are 13,000.

Check out the Lottie documentation.

It looks like Lottie is the next big thing in animation frameworks for the mobile and web world. On Android, so far we have usually used native animation frameworks such as the ObjectAnimator and ValueAnimator APIs, defined in code or in Android resource XML; it seems that Lottie will do for animation what Retrofit did for HTTP request frameworks.

So far, Lottie is already used in many apps, such as Google Home, Target, and Uber.

ML Kit for Firebase ML Kit is a mobile SDK that brings Google’s machine learning expertise to Android and iOS apps in a powerful yet easy-to-use package. Whether you’re new or experienced in machine learning, you can implement the functionality you need in just a few lines of code. There’s no need to have deep knowledge of neural networks or model optimization to get started. On the other hand, if you are an experienced ML developer, ML Kit provides convenient APIs that help you use your custom TensorFlow Lite models in your mobile apps.

Production-ready for common use cases ML Kit comes with a set of ready-to-use APIs for common mobile use cases: recognizing text, detecting faces, identifying landmarks, scanning barcodes, and labeling images. Simply pass in data to the ML Kit library and it gives you the information you need.

On-device or in the cloud ML Kit’s selection of APIs run on-device or in the cloud. Our on-device APIs can process your data quickly and work even when there’s no network connection. Our cloud-based APIs, on the other hand, leverage the power of Google Cloud Platform’s machine learning technology to give you an even higher level of accuracy.

Why

  • ML used correctly enhances the user experience. It makes your app seem more intelligent and personalized.

  • The technology is supported by Google themselves.

Navigation refers to the interactions that allow users to navigate across, into, and back out from the different pieces of content within your app. Android Jetpack’s Navigation component helps you implement navigation, from simple button clicks to more complex patterns, such as app bars and the navigation drawer. The Navigation component also ensures a consistent and predictable user experience by adhering to an established set of principles.

Why? The Navigation component follows Google’s recommendation for single-activity apps that navigate between fragments. See Google’s videos on the motivation:

  1. https://www.youtube.com/watch?v=9O1D_Ytk0xg
  2. https://www.youtube.com/watch?v=2k8x8V77CrU
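To illustrate the single-activity pattern, a navigation graph is declared as a resource XML file; the fragment class names below are hypothetical:

```xml
<!-- res/navigation/nav_graph.xml -->
<navigation xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:id="@+id/nav_graph"
    app:startDestination="@id/homeFragment">

    <fragment
        android:id="@+id/homeFragment"
        android:name="com.example.HomeFragment">
        <!-- triggered in code via NavController.navigate(R.id.action_home_to_detail) -->
        <action
            android:id="@+id/action_home_to_detail"
            app:destination="@id/detailFragment" />
    </fragment>

    <fragment
        android:id="@+id/detailFragment"
        android:name="com.example.DetailFragment" />
</navigation>
```

All destinations and transitions live in one place, which is what makes the navigation predictable and easy to visualize in Android Studio's graph editor.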

Try

Android Automotive is a variation of Google’s Android operating system, tailored for its use in vehicle dashboards. Introduced in March 2017, the platform was developed by Google and Intel, together with car manufacturers such as Volvo and Audi. The project aims to provide an operating system codebase for vehicle manufacturers to develop their own versions of the operating system. Besides infotainment tasks, such as messaging, navigation and music playback, the operating system aims to handle vehicle-specific functions such as controlling the air conditioning.

Contrary to Android Auto, Android Automotive is a full operating system running on the vehicle’s device, not relying on an external device to operate.

We think it is time to learn and try building applications for Android Automotive.

The Jetpack Benchmark library allows you to quickly benchmark your Kotlin-based or Java-based code from within Android Studio. The library handles warmup, measures your code performance, and outputs benchmarking results to the Android Studio console.

Use cases include scrolling a RecyclerView, inflating a non-trivial View hierarchy, and performing database queries.

The Jetpack Benchmark library is still in RC, and this is a good time to try integrating it on small code blocks in your app.

One method of protecting sensitive information or premium content within your app is to request biometric authentication, such as using face recognition or fingerprint recognition.

Even though fingerprint authentication has been in Android for several years already, we need to add newer technologies to both improve sign-in and allow other options for user verification. Users expect more control over their data in terms of privacy and security.

CameraX is a Jetpack support library built to help make camera app development easier. It provides a consistent and easy-to-use API surface that works across most Android devices, with backward compatibility to Android 5.0 (API level 21).

While it leverages the capabilities of camera2, it uses a simpler, use-case-based approach that is lifecycle-aware. It also resolves device compatibility issues for you so that you don’t have to include device-specific code in your codebase. These features reduce the amount of code you need to write when adding camera capabilities to your app.

Lastly, CameraX enables developers to leverage the same camera experiences and features that preinstalled camera apps provide, with as little as two lines of code. CameraX Extensions are optional add-ons that enable you to add effects like Portrait, HDR, Night, and Beauty within your application on supported devices.

The CameraX library is in the alpha stage, as its API surfaces aren’t yet finalized. That’s why we think this is the time to try it out, play with it, and get compatible with future releases.

Modules provide a container for your app’s source code, resource files, and app-level settings, such as the module-level build file and Android manifest file. Each module can be independently built, tested, and debugged.

Android Studio uses modules to make it easy to add new devices to your project. By following a few simple steps in Android Studio, you can create a module to contain code that’s specific to a device type, such as Wear OS or Android TV. Android Studio automatically creates module directories, such as source and resource directories, and a default build.gradle file appropriate for the device type. Also, Android Studio creates device modules with recommended build configurations, such as using the Leanback library for Android TV modules.
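A modularized project can be sketched in Gradle like this (the module names are hypothetical):

```groovy
// settings.gradle -- each included module builds, tests, and debugs independently
include ':app', ':core', ':feature-login', ':wear'

// app/build.gradle -- the app module declares explicit dependencies on shared modules,
// which is what gives modularization its fine-grained dependency control
dependencies {
    implementation project(':core')
    implementation project(':feature-login')
}
```

Because dependencies between modules are explicit, anything :core does not expose stays invisible to :app, enforcing stricter boundaries than packages can.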

Why should I Modularize my app?

  • Faster build times.
  • Fine-grained dependency control.
  • Improve reusability across other apps.
  • Improves the ownership & the quality of the codebase.
  • Stricter boundaries when compared to packages.
  • Encourages Open Source of the newly created libraries.
  • Makes Instant Apps & Dynamic Features possible (improving discoverability).

We think it is a must in any Android project.

Flutter Provider Package: A mixture of dependency injection (DI) and state management, built with widgets for widgets.

It purposefully uses widgets for DI/state management instead of Dart-only classes like Stream. The reason is that widgets are very simple yet robust and scalable.

By using widgets for state management, the provider can guarantee:

  • Maintainability, through a forced uni-directional data flow
  • Testability/composability, since it is always possible to mock/override a value
  • Robustness, as it is harder to forget to handle the update scenario of a model/widget

As the Provider package has become one of the most popular packages in Flutter, and has also been adopted by Google, it is certainly worth trying in Flutter apps.

ARCore is Google’s platform for building augmented reality experiences. Using different APIs, ARCore enables your phone to sense its environment, understand the world and interact with information. Some of the APIs are available across Android and iOS to enable shared AR experiences.

ARCore uses three key capabilities to integrate virtual content with the real world as seen through your phone’s camera:

  • Motion tracking allows the phone to understand and track its position relative to the world.
  • Environmental understanding allows the phone to detect the size and location of all types of surfaces: horizontal, vertical and angled surfaces like the ground, a coffee table or walls.
  • Light estimation allows the phone to estimate the environment’s current lighting conditions.

Why

  • AR enhances the user experience.

  • The technology is supported by Google itself.

Caveats

  • Not every device supports ARCore.

  • Google Play Services for AR must be installed.

We took a Udemy course on Dialogflow and, as part of it, created a Facebook chatbot in two phases:

  1. Dialogflow + Facebook only (direct)
  2. Dialogflow + NodeJS + Facebook (middleware)

We also started a course on Google Assistant. Next item: create a small project incorporating Google Assistant, Dialogflow, and some server side (serverless? NodeJS?). Possible problems:
    • Google Assistant is locale-specific. We need to find a way to install it on mobile.

Why? Google Assistant is leading the competition when compared to Cortana (Microsoft), Siri (Apple), and Alexa (Amazon). We already did some important research on the subject, with presentations, workshops, and even a fuze day at Tikal. Keeping tabs on it and playing with it further (for example: Google Sign-In, OAuth, surface switching, sentiment analysis, etc.) will give us good insight into conversational UI as a whole.

Google Play Instant enables native apps and games to launch on devices running Android 5.0 (API level 21) without being installed. You can build these types of experiences, called instant apps and instant games, using Android Studio. By allowing users to run an instant app or instant game, known as providing an instant experience, you improve your app or game’s discovery, which helps drive more installations and acquire more active users.

Google Play Instant provides rich, native experiences at the tap of a web link. People can experience your app without upfront installation, enabling a higher level and quality of engagement. To make an instant app load as quickly as a typical mobile webpage does, though, you need to create a well-structured, efficient instant app. The smaller your instant app’s binary, the faster it loads and the smoother the user experience is.

We categorize it as Try because, in our opinion, it is a new technology and now is a good time to use it if it fits your app.

A pragmatic lightweight dependency injection framework for Kotlin developers.

Written in pure Kotlin, using functional resolution only: no proxy, no code generation, no reflection.

RxDart is a reactive functional programming library for Google Dart, based on ReactiveX. Google Dart comes with a very decent Streams API out-of-the-box; rather than attempting to provide an alternative to this API, RxDart adds functionality on top of it.

Dart and Flutter already have decent reactive functionality, like the Stream API, which is quite similar to what RxDart has to offer. However, the RxDart API takes the Stream API further with BehaviorSubject and other Rx operators such as combine and zip, and with better management of the stream lifecycle. This is why it is well worth trying.

The Security library, part of Android Jetpack, provides an implementation of the security best practices related to reading and writing data at rest, as well as key creation and verification.

The library uses the builder pattern to provide safe default settings for the following security levels:

  • Strong security that balances great encryption and good performance. This level of security is appropriate for consumer apps, such as banking and chat apps, as well as enterprise apps that perform certificate revocation checking.
  • Maximum security. This level of security is appropriate for apps that require a hardware-backed key store and user presence for providing key access.

This library is currently available as an alpha library, which is why it is still in Try.

Stop

AngularJS became a very popular framework about 8 years ago, before HTML5 standards were widely embraced by all browsers.

AngularJS was built around MVC, and only in its last versions shifted to support components.

In large-scale applications, its change detection mechanism, based on scopes that prototypically inherit from one another, was nondeterministic, and its DOM manipulation was built upon jqLite, a subset of jQuery that, thanks to browser support for HTML5 APIs, has become a burden rather than a necessity.

Nowadays it is hard to hire AngularJS developers because the market has shifted away from it.

The Angular team decided to rewrite the entire framework, and Angular is a different framework from its ancestor, sharing only a similar name.

Therefore, we recommend stopping using it.

Why stop using it?

Keeping and maintaining applications written in AngularJS, although safer than rewriting, can cause difficulties in hiring. Our suggestion is to keep the application running but develop new features in a newer framework/library using micro frontends, so that the migration process is as smooth as possible. However, migration is something you must do if you wish to keep your AngularJS application alive.

Flow is a static type checker for your JavaScript code, developed and promoted by Facebook as an alternative to TypeScript. Flow uses static type annotations; it can also infer types implicitly by analyzing the code.

It is an alternative to TypeScript; however, as TypeScript gains more traction and becomes the de facto standard, the need for Flow has diminished. This is why we don’t encourage using Flow for type safety.

End Of Life:

For new projects, the polymer’s team recommends working with LitElement. LitElement is a smaller, lighter successor to the Polymer library.

For existing Polymer apps, try upgrading to version 3.0 of the Polymer library.

Keep

Accessibility refers to the design of products, devices, services, or environments for people with disabilities. The concept of accessible design ensures both “direct access” (i.e., unassisted) and “indirect access”, meaning compatibility with a person’s assistive technology (for example, computer screen readers).

ES modules have become widely supported among modern browsers. They are also supported in Node.js (behind an experimental flag in v12, and by default in v13).

Why start using it? A common standard for importing modules has long been a needed feature in JS, and the end of the split between the CommonJS import style (require) and other forms is finally here. This means that, at last, we can easily share code seamlessly between client and server (Node.js).

The Angular framework, less than two years after its first release, has gained a strong community and a large number of users.

From v2 to v8, Angular has provided a relatively easy migration path along with frequent major releases, which means it is here to stay.

v9, the upcoming version, will introduce Ivy for the first time. Ivy is a new rendering mechanism that is far more efficient. Angular may also shed its heavy boilerplate and become much simpler for the beginning developer.

Angular still has a steep learning curve. New developers need to learn a substantial amount of “building blocks”. This might change in upcoming versions; however, Angular code is still written using its version-2 paradigms, which include modules, services, pipes, components and more.

Why keep it?

Angular, although less popular than React, has proven its stability and that it is here to stay. v9 may be a significant change, and Angular may gain more popularity after it comes out, especially if its learning-curve issue improves.

A repository that contains multiple packages or projects. Those projects can, but don’t have to, be related. The most famous monorepo pioneers are Google, Facebook and Twitter.

Monorepo enables development teams to work on a large-scale application with multiple modules in one repository, with separation of build/test processes between modules.

Front-end developers use Lerna or Yarn with workspaces enabled. Lerna is a CLI tool to manage monorepo projects by running bulk tasks in parallel or in sequence.
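As a minimal sketch of such a setup, a root-level lerna.json might look like the following (the packages/* layout is just a common convention, not a requirement):

```json
{
  "version": "independent",
  "npmClient": "yarn",
  "useWorkspaces": true,
  "packages": ["packages/*"]
}
```

With useWorkspaces enabled, Lerna delegates package linking to Yarn workspaces instead of managing it itself.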

CSS modules are styles scoped locally to a component.

When creating a component, you may import style from a CSS file, using the import statement, and all CSS rules will be scoped to the component.

The CSS module compiler changes class names to unique names that will match only the selector. It works almost the same way as emulated view encapsulation in Angular; however, it has many implementations, so it is framework-agnostic.

Why stop it? It inflates bundles, it is very hard to debug, and on top of all that, there are better alternatives.

Why keep it?

CSS modules answer the need to avoid name collisions in CSS classes. For now, it seems like a good approach, if you don’t necessarily want to use Shadow-DOM for encapsulation.

CSS variables are entities defined by CSS authors that contain specific values to be reused throughout a document. They are set using custom property notation (e.g., --main-color: black;) and are accessed using the var() function (e.g., color: var(--main-color);).

Complex websites have very large amounts of CSS, often with a lot of repeated values. For example, the same color might be used in hundreds of different places, requiring global search and replace if that color needs to change. CSS variables allow a value to be stored in one place, then referenced in multiple other places. An additional benefit is semantic identifiers. For example, --main-text-color is easier to understand than #00ff00, especially if this same color is also used in other contexts.

CSS variables are supported in all currently used browsers except IE and Opera Mini. https://caniuse.com/#feat=css-variables

Why

CSS variables provide natively supported API for binding data into styling, which is supported by all major browsers. It helps you overcome site complexity with less code involved and share styles without the use of preprocessors.

Functional programming is a programming paradigm that has become more popular in recent years; it has many benefits for event-driven architectures and other state/context handling and processing problems.

Why? Object-oriented programming has dominated the market for many years. As we move into the era of big data, we need code that can be distributed across multiple processors or even machines. For this, we need a paradigm that limits side effects, and this can be done beautifully with functional programming.
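As a small illustration of the idea (a toy sketch, not tied to any particular framework): a pure function depends only on its input and touches no shared state, which is exactly what makes it safe to run on many cores or machines.

```typescript
// Pure function: output depends only on input, no shared state is touched.
const normalize = (x: number, max: number): number => x / max;

// Because `normalize` has no side effects, each element could be processed
// on a different core or machine without any coordination.
const readings = [20, 40, 80];
const normalized = readings.map((r) => normalize(r, 100));

console.log(normalized); // [0.2, 0.4, 0.8]
```

The same property is what lets map/reduce frameworks shard work freely: order and placement of the calls cannot change the result.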

Jest has become a de-facto standard for testing in react and it is more and more adopted in other frameworks.

In terms of code structure and syntax, it is not different from Jasmine.

This testing platform, we think, is going to serve us for years to come.

Flutter is Google’s cross-platform framework while React Native is also a cross-platform framework developed by Facebook.

Although React Native has existed since 2015, Flutter, announced in May 2017 and officially released in December 2018, has already reached high popularity and is becoming a true alternative to React Native.

Flutter is an open-source project, developed and supported by Google and based on the Dart programming language. Dart compiles directly to platform-native code and is an OO, reactive, event-driven modern language, which makes it great for cross-platform programming.

React Native is based on the javascript programming language.

Google seems to have designed Flutter to be the best mobile app development framework, while pushing Dart to be a Flutter-friendly programming language. The community is quickly adopting Flutter as an alternative cross-platform framework, and there are already thousands of apps in production for both Android and iOS. On top of that, Flutter for Web was recently announced.

The question of which framework to use becomes more and more interesting, as there are still pros and cons to each framework.

What is it? Prettier is an opinionated code formatter. It enforces a consistent style by parsing your code and re-printing it with its own rules.

Why does it matter? Having a code “style guide” is very valuable for a project, it minimizes friction between developers that can arise from different coding styles and it helps to focus on the code instead of focusing on formatting.

How does it work? Maintaining a codebase that is readable for everyone is not an easy task without automating the formatting process. Prettier is an automatic code formatter that comes preconfigured with “best practice” and standard rules that can be overridden via a configuration file.

When to use it? In any project! Developed by the people who built React and React Native, and used by many major projects, Prettier is a must-have.
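A hypothetical .prettierrc showing how the standard rules can be overridden via a configuration file (these particular values are just an example, not a recommendation):

```json
{
  "printWidth": 100,
  "singleQuote": true,
  "trailingComma": "es5",
  "semi": true
}
```

Anything not listed falls back to Prettier's opinionated defaults, which keeps the config file short.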

React is old news, but it still keeps changing and improving. React 16 has many improvements and features worth tracking.

Among the changes are some long-standing feature requests, including fragments, error boundaries, portals, support for custom DOM attributes, improved server-side rendering, and reduced file size.

React Native - building native apps with React.

React Native is the first and most popular paradigm for building native apps with JS. These are not weak PhoneGap-like web apps but real native applications, with real native-like performance.

RN is not dependent on any vendor in order to support new native features; it is in sync with the native OS components, which is partly what makes it so strong.

It is also relatively easy to pick up, especially if you worked with React Web before.

It has a vibrant community with many users and contributors.

This technology really bridges frontend and mobile in the best way we know of to date.

Lastly, RN is used in React VR/AR.

Reactive Programming is a paradigm that changes the direction of the flow. Instead of creating threads and blocking for communication, we create pipelines where data passes through and each layer can react to the data received. This indirection changes the code flow from blocking to async.

Over time the reactive extensions have become a standard for adding higher-level functionality on a stream of data while staying agnostic to the data itself. The extensions are based on the observer pattern and allow for cancellation of in-stream data processing.

Why? Reactive programming has been around for a while but has only recently started to hit the mainstream. Most popular frameworks, like Vert.x and Spring, have support for reactive programming. For those looking for performance, reactive programming is a must.
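A minimal sketch of the pipeline idea in plain TypeScript (no specific reactive library assumed): instead of pulling values and blocking, each layer registers a reaction and data is pushed through.

```typescript
type Handler<T> = (value: T) => void;

// A tiny push-based pipeline: each subscribed layer reacts to pushed data.
class Pipe<T> {
  private handlers: Handler<T>[] = [];
  subscribe(h: Handler<T>): void {
    this.handlers.push(h);
  }
  push(value: T): void {
    this.handlers.forEach((h) => h(value));
  }
}

const prices = new Pipe<number>();
const seen: string[] = [];

// Downstream layers react to data instead of blocking to wait for it.
prices.subscribe((p) => seen.push(`tick: ${p}`));
prices.subscribe((p) => {
  if (p > 100) seen.push("alert!");
});

prices.push(90);
prices.push(120);
console.log(seen); // ["tick: 90", "tick: 120", "alert!"]
```

Real reactive libraries add operators, backpressure, and cancellation on top of this basic push model.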

RxJS is an implementation of ReactiveX in JavaScript: a library for reactive programming using Observables, making it easier to compose asynchronous or callback-based code.

ReactiveX is more than an API, it’s an idea and a breakthrough in programming. It has inspired several other APIs, frameworks, and even programming languages.

It has the following principles:

Functional: Avoid intricate stateful programs, using clean input/output functions over observable streams.

Less is more: ReactiveX’s operators often reduce what was once an elaborate challenge into a few lines of code.

Async error handling: Traditional try/catch is powerless for errors in asynchronous computations, but ReactiveX is equipped with proper mechanisms for handling errors.

Concurrency made easy: Observables and Schedulers in ReactiveX allow the programmer to abstract away low-level threading, synchronization, and concurrency issues.

Why? We want to keep RxJS since it is the de facto most popular library for implementing reactive programming in JavaScript, and it is used internally by Angular.
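To make the Observable idea concrete, here is a toy sketch of the pattern in plain TypeScript. This is not the real RxJS API (in real code you would import Observable and operators from rxjs); it only shows the producer/subscriber shape and how an operator like map composes.

```typescript
// Toy Observable: subscribing runs the producer for that observer.
class Observable<T> {
  constructor(private producer: (next: (v: T) => void) => void) {}

  subscribe(next: (v: T) => void): void {
    this.producer(next);
  }

  // map returns a new Observable that transforms each emitted value.
  map<R>(fn: (v: T) => R): Observable<R> {
    return new Observable<R>((next) => this.subscribe((v) => next(fn(v))));
  }
}

const source = new Observable<number>((next) => [1, 2, 3].forEach(next));
const results: number[] = [];
source.map((n) => n * 10).subscribe((n) => results.push(n));

console.log(results); // [10, 20, 30]
```

RxJS builds on the same shape with dozens of operators, subscription/teardown handling, and Schedulers.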

Sass and Less are dynamic preprocessor style sheet languages that compile to CSS.

Why keep using it?

Although, with the advancement of CSS itself (CSS3, CSS variables, etc.), more developers are returning to writing ‘pure’ CSS, the use of one of the two is still very widespread in the frontend domain and allows us to write CSS in a modern way while targeting older browsers.

Selenium has become the default e2e testing mechanism; nevertheless, it requires a lot of setup for web drivers, which can become a hassle, especially when running automation on build machines; some organizations refuse to install browsers on their build machines.

Why keep it?

Despite its shortcomings, Selenium is still very popular among our customers, and its substitutes don’t seem to be gaining much popularity. This situation may not last long, but it is the current one.

SEO (Search Engine Optimization) is relevant for websites rather than web apps and helps in driving organic (not from ads) traffic from search engines. No matter how cool and beautifully designed a website may be, if it does not rank well in the SERP (Search Engine Results Page), it will likely go unnoticed on the web.

The front-end developer must ensure that the website is coded in a search-engine-friendly fashion:

  • For best SEO performance, prefer SSR over SPA
  • Use semantic HTML coding and micro-formatting attributes
  • Care about website performance
  • Use linking between pages
  • Create friendly URLs and use the canonical link tag
  • Use sitemap.xml and robots.txt to control what should be scanned by search engines

Snapshot testing is a feature of the Jest testing framework (since v14.0, 2016) for asserting declarative code (code that produces simple views). While running a snapshot test, Jest produces a serializable (rendered) value for the React tree. In this way, snapshot testing allows one to:

  1. See the expected rendered outcome, including testing multiple versions of the expected outcome (error messages, for example)
  2. Test changes in the code and make sure that any change in the outcome is intentional
  3. Run ‘semi-integration’ tests (a couple of components)
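A toy sketch of the snapshot idea (not Jest's actual implementation, which persists snapshots to files): the first run records a serialized rendering, and later runs compare against it so any change in the output must be intentional.

```typescript
// Toy snapshot store: the first call records the serialized value,
// later calls report whether the rendered output still matches.
const snapshots = new Map<string, string>();

function matchSnapshot(name: string, value: unknown): boolean {
  const serialized = JSON.stringify(value);
  if (!snapshots.has(name)) {
    snapshots.set(name, serialized); // first run: record the snapshot
    return true;
  }
  return snapshots.get(name) === serialized; // later runs: compare
}

// Hypothetical "render": produces a simple serializable view tree.
const render = (user: string) => ({ tag: "h1", text: `Hello ${user}` });

const first = matchSnapshot("greeting", render("Ada")); // records
const same = matchSnapshot("greeting", render("Ada")); // still matches
const changed = matchSnapshot("greeting", render("Bob")); // output changed
console.log(first, same, changed); // true true false
```

In Jest, a failing comparison either flags a regression or, if the change was intentional, the snapshot is updated.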

Storybook has become an essential tool in the component development toolbox. It allows for the development and showcasing of components easily with minimal environment configuration. Many libraries use storybook to create component catalogs.

Storybook is easily pluggable and there are very useful plugins.

A reference storybook by AirBnB

Styled-Components is a library that allows developers to take advantage of the best of all worlds of modern CSS styling with minimal setup effort. It enables CSS modules by default to avoid style collisions, and it enables complex style hierarchies, functions, and variables without setting up a complex Webpack configuration. We think this is a very efficient and powerful solution for styling modern web apps.

TypeScript has become popular once again since the Angular team chose it as their main programming language. Many TS standards were embraced in ES2015-ES2018, and it has become heavily used in almost every JavaScript framework.

For larger projects, where you work as a team, we recommend preferring TypeScript on the backend over JavaScript. The reasons are:

  1. You allow the IDE to acknowledge errors in the use of classes and functions that would only be perceived at runtime.
  2. When we define types, the IDE is able to relate objects and functions to the files that gave origin to them.
  3. You catch errors before you even run your app.
  4. Research suggests static typing could prevent around 15% of bugs.

TypeScript is already used by many projects and customers as the implementation language on the backend, especially in the FaaS (e.g. AWS Lambda) world.
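A small sketch of point 1 above (the User shape and greet function are made-up examples): with explicit types, misuse is flagged by the IDE and compiler instead of surfacing at runtime.

```typescript
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}`;
}

const u: User = { id: 1, name: "Dana" };
console.log(greet(u)); // "Hello, Dana"

// Both of these are rejected at compile time, before the app ever runs:
// greet({ id: 2 });      // error: property 'name' is missing
// greet("not a user");   // error: string is not assignable to User
```

The same annotations also let the IDE jump from any call site to the defining interface, which is point 2 above.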

Vue.js is a JavaScript framework that has been around since 2014 but is becoming more and more popular in the JS/frontend community, and is considered (today) the third most used framework (after React and Angular). Vue.js is designed in such a way that it can be incrementally adopted and scale between a library and a framework depending on the use case - from a view layer only in part of the application up to a full-blown ecosystem for complex Single Page Applications.

Why should we keep it?

Vue.js is gaining popularity and has a large community working with it. It has become one of the major frameworks/libraries, and it seems it is here to stay.

Web Components

Web Components are the missing piece in UI development. They enable writing portable pieces of UI at zero cost when the next shiny framework arrives. They are compatible with all currently trending libraries (i.e. React/Angular/AngularJS) and have a growing community and interest.

Why use it?

Web components are natively supported by all modern browsers (and polyfills available for legacy browsers) and are fully reusable across frameworks. The web components web standard is used by major companies: GitHub, Adobe, Microsoft, Spotify, Google, IBM, Apple and many more.

The creation of core components using the web-component standard provides future compatibility for possible changes in the javascript ecosystem.

Implementation of user-interface controls and widgets using the web component standard provides high-performant, portable and fully reusable components.

There is plenty of community support, libraries, tools and free resources available.

Why keep it?

The web components standard is part of the w3c specifications and is already implemented in major browsers, including mobile.

In addition, issues like framework migration, micro-frontends, polyglot development, etc. become less painful when web components are part of the project’s code.

Webpack (currently in v4.6) is a module bundler. It analyzes a cluster of interdependent modules (mainly JS, but also other assets) and produces bundles of static items.

Webpack is highly configurable (which is sometimes considered one of its downsides) and has a very rich plugin ecosystem.

In the current version, Webpack introduced a zero-configuration mode with sensible defaults via the mode option.

Webpack is de facto a standard for frontend web development in general and builds in particular.

WebSocket is the most efficient way for duplex communication between server and client. There are many libraries implementing WebSocket communication that enable fallback to other forms (SSE, long polling) when it is not supported.

Why keep it?

WebSocket is still the most efficient way to communicate quickly with the server and to have the server push-notify the browser/client. It is used behind the scenes in every live-reload development server. We recommend using it through the libraries wrapping it, so fallbacks are enabled.

WebWorker provides a mechanism to run scripts outside the main thread. Using it can improve performance in highly scaled applications.

Using postMessage, a web worker can communicate with the main thread; it can also use shared memory, which is risky but may speed up your application even further.

Why? WebWorker has been around for several years now, and it is widely used with many frameworks. Having heavy operations run in a web worker improves performance.

Start

The Apollo Platform is a family of technologies you can incrementally add to your stack: Apollo Client to connect data to your UI, Apollo Engine for infrastructure and tooling, and Apollo Server to translate your REST API and backends into a GraphQL schema.

  • Apollo Client Bind data to your UI with the ultra-flexible, community-driven GraphQL client for React, JavaScript, and native platforms.
  • Apollo Server Translate your existing REST APIs and backends into GraphQL with this powerful set of tools for building GraphQL APIs.
  • Apollo Engine The GraphQL gateway that provides essential features including caching, performance tracing, and error tracking.

Chaos Engineering is becoming a discipline in designing distributed systems in order to address the uncertainty of distributed systems at scale.

Chaos Engineering can be thought of as the facilitation of experiments to uncover systemic weaknesses.

These experiments follow four steps:

  1. Start by defining ‘steady state’ as some measurable output of a system that indicates normal behavior.
  2. Hypothesize that this steady-state will continue in both the control group and the experimental group.
  3. Introduce variables that reflect real-world events like servers that crash, hard drives that malfunction, network connections that fail, etc.
  4. Try to disprove the hypothesis by looking for a difference in steady-state between the control group and the experimental group.

In essence: the harder it is to disrupt the steady state, the more confidence we have in the behavior of the system. And if a weakness is uncovered, we now have a target for improvement before that behavior manifests in the system at large.

Chaos Engineering got its name mainly through Netflix’s Chaos Monkey.

Read More @ https://principlesofchaos.org/

The conversational interface is the latest trend in the field of digital design that is focused on improving how people interact with systems. Industry leaders such as Apple, Google, Microsoft, Amazon, and Facebook are strongly focused on building a new generation of conversational interfaces.

Why? Conversational UI might very well be the next step in UI. You can see it when people use bots in slack, virtual agents on a web site, smart home devices and so on. Not following the evolution of this technology, will keep us in the dark on a vital subject.

Dart is Google’s programming language, optimized for the client side. Many rich frameworks, such as Flutter, use Dart as their development language.

Approach: Dart is a lightweight, object-oriented language that utilises modern programming paradigms such as reactive functional programming. Dart syntax is very familiar to C#, C++, Java and Kotlin developers. Furthermore, Dart code is highly expressive.

Why? Dart compiles to ARM and x86 code, so apps developed in Dart can run natively on Android and iOS.

ES2017 is a set of new standards added to the JS language, partially supported by browsers and engines: new language features such as async/await, new browser APIs such as shared memory, and so on.

Why start using it?

ES2017 standards are now supported in all browsers. Most features, if not all, have polyfills for IE, which nowadays is required only by relatively few projects, mainly because the last OS that ships with it has reached the end of its support.
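A short sketch of the flagship ES2017 feature, async/await (fetchUser here is a made-up stand-in for a real network call): awaiting flattens promise chains into sequential-looking code.

```typescript
// `await` turns promise chains into sequential-looking code.
async function fetchUser(id: number): Promise<{ id: number; name: string }> {
  return Promise.resolve({ id, name: "Ada" }); // stands in for an HTTP request
}

async function main(): Promise<string> {
  const user = await fetchUser(1); // suspends here without blocking the thread
  return `loaded ${user.name}`;
}

main().then((msg) => console.log(msg)); // logs "loaded Ada"
```

Errors propagate through await as exceptions, so an ordinary try/catch around the awaited call works, unlike with raw callbacks.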

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

  • GraphQL queries always return predictable results.
  • While typical REST APIs require loading from multiple URLs, GraphQL APIs get all the data your app needs in a single request.
  • GraphQL uses types to ensure Apps only ask for what’s possible and provide clear and helpful errors.

We suggest starting to use this tool where it applies, as an alternative to REST.
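The "ask for exactly what you need" idea can be sketched as follows. This is a toy resolver in plain TypeScript, not a real GraphQL implementation; the user record and field names are made up for illustration.

```typescript
// A record with more fields than any one client needs.
const user = { id: 1, name: "Ada", email: "ada@example.com", role: "admin" };

// Simulate `query { user { name email } }`: return only the requested fields,
// the way a GraphQL server shapes its response to match the query.
function resolve(requested: (keyof typeof user)[]): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const field of requested) {
    out[field] = user[field];
  }
  return out;
}

console.log(resolve(["name", "email"])); // { name: "Ada", email: "ada@example.com" }
```

A REST endpoint would typically return the whole record; here the response shape always mirrors the query, which is what makes results predictable.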

GraphQL Subscription: In addition to fetching data using queries and modifying data using mutations, the GraphQL spec supports a third operation type, called a subscription.

Subscriptions are operations that watch events emitted by the server (e.g. Apollo Server) and push updated results to subscribed clients when those events fire.

It relies on an events mechanism such as Apollo’s PubSub, an MQTT broker, Kafka, Redis, socket.io, etc.

We would likely use subscriptions where intermittent polling or manual refetching is not enough, like a chat application that requires almost real-time updates.
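The underlying pub/sub mechanism these implementations rely on can be sketched as follows (a toy in plain TypeScript, not Apollo's actual PubSub API; the MESSAGE_SENT event name is made up):

```typescript
type Listener = (payload: unknown) => void;

// Minimal pub/sub: subscribe registers a listener per event name,
// publish notifies every listener registered for that event.
class PubSub {
  private topics = new Map<string, Listener[]>();

  subscribe(event: string, listener: Listener): void {
    const list = this.topics.get(event) ?? [];
    list.push(listener);
    this.topics.set(event, list);
  }

  publish(event: string, payload: unknown): void {
    (this.topics.get(event) ?? []).forEach((l) => l(payload));
  }
}

const pubsub = new PubSub();
const received: unknown[] = [];

// A GraphQL subscription resolver would register roughly like this...
pubsub.subscribe("MESSAGE_SENT", (msg) => received.push(msg));
// ...and a mutation would publish, pushing the update to subscribers.
pubsub.publish("MESSAGE_SENT", { text: "hi" });

console.log(received); // [ { text: "hi" } ]
```

In production, the in-memory map is swapped for a broker (Redis, Kafka, MQTT) so events reach subscribers across server instances.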

HTTP/2 may speed up your website significantly! HTTP/2 provides many new mechanisms that mitigate HTTP/1.1 issues and boost page performance.

HTTP/2 is the second major version of the HTTP internet protocol. Main benefits:

  • Request multiplexing over a single TCP connection
  • Compression of request headers
  • Binary protocol
  • HTTP/2 Server Push

Browser compatibility: https://caniuse.com/#feat=http2

HTTP/2 demo

Why start using it?

HTTP/2 is now supported by all major browsers, it offers a significant improvement in network communication speed and security.

Flow-Based Programming (FBP) is a paradigm in computer engineering in which business logic is described as a network of “logical building blocks”. The main benefit of FBP is that the business logic can be developed visually by personnel who are not engineers and are not aware of the specific technology in which the building blocks are written internally. Development is usually done with visual editors and tools.

Node-RED is an FBP development tool for both software and hardware applications. It provides a browser-based visual editor and a Node.js-based, event-driven, non-blocking framework for implementing building blocks. Flows can be deployed directly from the editor in a single click.

Complex applications have been written as MSA (microservice architecture) as a standard in many organizations for a while now, and R&D teams deal more and more with adapting existing systems to rapidly changing business logic. Node-RED gives organizations the ability to let non-technical staff define and update business logic in a live system with zero downtime, without involving R&D teams, leaving those teams enough time and capacity to handle platform and framework issues.
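The "network of building blocks" idea can be sketched as a chain of small independent nodes (a plain TypeScript toy, not Node-RED's actual API; the parse/double/format nodes are made up for illustration):

```typescript
type Block<I, O> = (input: I) => O;

// Each "node" does exactly one job and knows nothing about the others.
const parse: Block<string, number> = (s) => Number(s);
const double: Block<number, number> = (n) => n * 2;
const format: Block<number, string> = (n) => `result: ${n}`;

// The flow is just the wiring between nodes, much like dragging
// connections between blocks in a visual editor.
function flow(input: string): string {
  return format(double(parse(input)));
}

console.log(flow("21")); // "result: 42"
```

Because the logic lives in the wiring rather than inside the nodes, rearranging the flow (what a visual editor lets non-engineers do) requires no changes to the nodes themselves.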

Jest is a prominent JavaScript testing Node library that is mainly used for unit & integration tests. Although it is platform- and framework-agnostic (Node/browser, Angular/Vue/React), it is mainly associated with React.

Puppeteer is a Node library that provides an API for controlling and manipulating chromium-based applications via the chrome DevTools protocol.

Why start using it?

Mixing Jest & Puppeteer together - running the tests with Jest while using Puppeteer to control and manipulate the Chromium-based app (headless or not) - gives us a powerful e2e tool that is very fast, reliable, and in line with our unit and integration tests.

The Micro-Frontends approach enables us to split our products into separate modules, each built with any web technology (i.e. React/Angular/Vue/…). A thin code layer orchestrates them as a single product, keeping the UX intact. The approach enables companies to stall rewrites of old production code and combine new technologies with legacy ones without breaking everything.

It fits integrations between apps, products encapsulated in other products, and similar cases where the apps are written in different frameworks/tooling.

This architecture can release the organization from being bound to a certain technology even when it becomes obsolete. It brings the microservices world to the frontend, along with the challenge of keeping an SPA-grade user experience.

Why

  • For large-scale front-ends, it is possible to create hybrid products built from multiple frameworks/languages, each built and maintained separately.
  • When a product requires new features but is built on top of obsolete technology, the refactoring of the old code can be postponed or even skipped while the new features are written with new technologies.
  • New products can be built with a polyglot-frontends design, for future compatibility, with little effort to initialize the project.

Why try it? The POC to determine whether the solution fits the problem is quick, and the time saved compared to a rewrite is enormous.

Cons Requires senior developer(s) to handle the integration. There is no out-of-the-box solution; every product needs its own variation of the solution.

Pros Saves money and time. Keep building your application without the painful setback of rewriting it.

Progressive Web Apps are user experiences that have the reach of the web, and are:

  • Reliable - Load instantly and never show the downasaur, even in uncertain network conditions.

  • Fast - Respond quickly to user interactions with silky smooth animations and no janky scrolling.

  • Engaging - Feel like a natural app on the device, with an immersive user experience.

This new level of quality allows Progressive Web Apps to earn a place on the user’s home screen.

Why start using it? Having one codebase for both mobile apps and web apps has long been a desire of many application developers. PWA may get you there, at least for Google's Fuchsia OS, where PWAs are going to have the same status as Android apps, in addition to all the usual PWA capabilities such as reliability, speed, and native look and feel.


Hooks are a new addition in React 16.8. They let you use state and other React features without writing a class.

We like using hooks as they significantly reduce the amount of boilerplate code in the system. They also remove the dilemma of choosing a component type ahead of time and allow for easy functionality changes throughout the development of the app.
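The classic counter example shows the shape, with `useState` replacing class state and `useEffect` replacing lifecycle methods:

```javascript
// Counter.js - the same component as a function with hooks, no class needed
import React, { useState, useEffect } from 'react';

function Counter() {
  // useState replaces this.state / this.setState
  const [count, setCount] = useState(0);

  // useEffect replaces componentDidMount / componentDidUpdate;
  // it re-runs only when `count` changes
  useEffect(() => {
    document.title = `Clicked ${count} times`;
  }, [count]);

  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

export default Counter;
```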

Rollup is a module bundler for JavaScript which compiles small pieces of code into something larger and more complex, such as a library or application. It uses the standardized ES module format for code, instead of previous idiosyncratic solutions such as CommonJS and AMD. ES modules let you freely and seamlessly combine the most useful individual functions from your favorite libraries. Rollup can optimize ES modules for faster native loading in modern browsers, or output a legacy module format allowing ES module workflows today.
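A minimal configuration sketch illustrating the dual output (the entry and output paths are placeholders):

```javascript
// rollup.config.js - build a library once, emit both module formats
export default {
  input: 'src/main.js', // placeholder entry point
  output: [
    { file: 'dist/bundle.esm.js', format: 'es' },  // native ES modules for modern browsers
    { file: 'dist/bundle.cjs.js', format: 'cjs' }, // legacy CommonJS output
  ],
};
```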

Although browsers run more and more applications, and fewer and fewer plain websites, server-side rendering is still important for search engines and for content previews when sharing apps, and it still makes pages load faster where static content is involved.

There are techniques for rendering content on the server side that involve running client code on the server, for SEO and initial page load, which we recommend starting to use.

Stencil is a compiler for building Web Components that can be used in JavaScript projects (Angular, React, Vue) or in a vanilla project. The produced code also includes:

  • A tiny virtual DOM layer
  • Efficient one-way data binding
  • An asynchronous rendering pipeline (similar to React Fiber)
  • Lazy-loading

Why Stencil is a popular library, backed by plenty of community tutorials and answers around the web.

Pros Supports SSR + hydration (though all web components are natively capable of SSR + hydration).

Cons Since Stencil is a compiler, your code is transformed, and a virtual DOM runtime is mandatory even for a single component. Stencil components cannot run without the supporting virtual DOM.

Frontend development deals with creating user interfaces. Confirming that the user actually sees what we (and the designers) expect them to see is crucial. This is even more important when using a shared component library, where a visual error in one component might impact a vast number of systems.

Embedding visual testing as part of the organization’s automation process ensures that nothing changes unintentionally, and when a change is made, one might integrate with a UI/UX design team, which ensures high quality with minimum effort.

Visual testing can be done with e2e tools such as Cypress/Puppeteer and Jest, and/or component libraries such as Storybook/Styleguidist.

Why start using it? Visual testing answers the long-standing requirement of making sure your application looks the way it should. While unit, integration, and e2e tests cover the functional parts, visual testing covers the visual part, and it can do so automatically.

The Composition API is the third (and hopefully last) API style for creating Vue components. It provides a better way to separate your code by concerns/composition logic, instead of by component logic.

explained: https://www.vuemastery.com/courses/vue-3-essentials/why-the-composition-api/ https://vue-composition-api-rfc.netlify.com/

Why start using it?

Applying the Vue Composition API to your Vue application can reduce code duplication thanks to the ability to write shared composition logic (compositions) and use it across several components. It is a more elegant solution than the previous Vue mechanism (mixins), and its API may convince React users to join the Vue club, because it is very similar to React hooks.
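A minimal composition sketch, assuming Vue 3 (the names `useCounter`, `count`, and `double` are illustrative):

```javascript
// useCounter.js - shared logic extracted into a "composable"
import { ref, computed } from 'vue';

export function useCounter() {
  const count = ref(0);
  const double = computed(() => count.value * 2);
  const increment = () => { count.value++; };
  return { count, double, increment };
}

// Any component can reuse the same logic instead of duplicating it:
// export default {
//   setup() {
//     const { count, double, increment } = useCounter();
//     return { count, double, increment };
//   },
// };
```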


Since HTML5 came out and browsers stopped running plugins, hosting non-JavaScript code in the browser is coming alive again with WebAssembly.

(From https://webassembly.org) WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.

Why? We want to promote WebAssembly since it enables heavy processing in the browser via asynchronous parallel threads. It can be used to analyze and process almost everything, including heavy graphics.

WebRTC is mainly about peer-to-peer communication with audio, video, and data. It is a very powerful way of communicating that lives inside the browser sandbox, which makes it secure.

Why start using it?

We recommend using it for any task that requires peer-to-peer communication and does not need to be fully recorded by the server. The server may still be used for the discovery mechanism. It can save you unnecessary networking and therefore money.
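A browser-side sketch of a peer-to-peer data channel; the signaling transport is out of scope, so `sendToSignalingServer` below is a placeholder for your own discovery mechanism:

```javascript
// One peer creates the connection and a data channel
const peer = new RTCPeerConnection();
const channel = peer.createDataChannel('chat');

channel.onopen = () => channel.send('hello, peer');
channel.onmessage = (event) => console.log('received:', event.data);

// The offer/answer exchange must travel through your own
// signaling mechanism (e.g. a WebSocket server) - placeholder call:
peer.createOffer()
  .then((offer) => peer.setLocalDescription(offer))
  .then(() => sendToSignalingServer(peer.localDescription));
```

Once the answer and ICE candidates have been exchanged, media and data flow directly between the peers.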

Try

The JAMstack (Jekyll, Hugo, Nuxt, Next, Gatsby) is not new, but its popularity has increased recently due to increased maturity and better hosting solutions (e.g. Netlify, GitHub Pages). While Jekyll (Ruby-based) is declining in popularity, we see more and more companies opting to use Gatsby (React), Next (React), and Nuxt (Vue) as a basis for their SPAs or content systems, thanks to a rich ecosystem of plugins, ready-to-use starter projects, and lots of documentation and examples. Using a JAMstack framework as a basis for an application allows developers to focus on business logic and content while saving time and effort on setup and basic configuration.

Cypress is a framework for fast, easy and reliable testing for anything that runs in a browser. It is a one-stop-shop that can do e2e, integration, and unit testing.

Cypress is a JavaScript-based end-to-end testing framework that doesn’t use Selenium at all. It is built on top of Mocha, a feature-rich JavaScript test framework running on Node.js and in the browser that makes asynchronous testing simple and fun. Cypress also uses Chai, a BDD/TDD assertion library for Node and the browser that can be delightfully paired with any JavaScript testing framework.

Why should we try it?

Currently, Cypress only works in Chrome/Chromium, so for projects that require e2e tests in other browsers it is not an ideal solution. However, if your target browser is Chrome, this is definitely a very good solution for e2e, integration, and unit testing of web apps.
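A minimal spec sketch (URL, selectors, and credentials are placeholders):

```javascript
// cypress/integration/login.spec.js
describe('login page', () => {
  it('logs the user in', () => {
    cy.visit('http://localhost:3000/login'); // placeholder URL
    cy.get('input[name=email]').type('user@example.com');
    cy.get('input[name=password]').type('secret');
    cy.get('button[type=submit]').click();
    // Cypress retries assertions automatically - no explicit waits needed
    cy.url().should('include', '/dashboard');
  });
});
```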

Progressive Web Apps (PWA) for desktop may become THE way to develop hybrid applications for mobile and web, and now also for desktop.

Starting with Chrome 73, Progressive Web Apps are supported on all desktop platforms, including Chrome OS, Linux, Mac, and Windows.

The advantages of leveraging PWA for desktops are quite plain: it easily allows code sharing with browser-based web apps, with minimal modifications to support the desktop runtime.

Why should we try it? We recommend considering this approach as a means to share code with web applications and as an easy-to-employ alternative to NW.js (previously known as node-webkit), Electron, and other Node.js-based desktop frameworks.

In this new agile world, many people question the role of architecture, and certainly the pre-planned architecture vision does not fit in with modern dynamism. But there is another approach to architecture, one that embraces change in the agile manner. In this view, architecture is a constant effort, one that works closely with programming so that the architecture can react both to changing requirements and to feedback from programming. It is called “Evolutionary Architecture”, to highlight that while changes are unpredictable, architecture can still move in a better direction.

We put it in our Try category, as it brings new principles to the architecture world: “Bring the Pain Forward” and “Last Responsible Moment”.

LitElement is a Polymer project for writing custom elements, along with a data-binding mechanism.

LitElement is built upon lit-html and is much lighter than component-based frameworks/libraries such as Angular/React/Vue.

The main advantage of using LitElement is that it lets you continue writing code close to the browser, using its API, while saving the time of writing your own change-detection mechanism.

Why? LitElement/lit-html is currently one of the most popular libraries for writing custom elements; its future may be determined by the market's tendency to use the web platform more and the heavy frameworks/libraries less.
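A minimal custom element sketch, assuming the lit-element package (the tag name and property are illustrative):

```javascript
import { LitElement, html } from 'lit-element';

class HelloTag extends LitElement {
  static get properties() {
    // declared properties are observed: changing `name` re-renders automatically
    return { name: { type: String } };
  }

  render() {
    // lit-html templates only re-render the parts that changed
    return html`<p>Hello, ${this.name}!</p>`;
  }
}

customElements.define('hello-tag', HelloTag);
// usage in plain HTML, with no framework runtime:
// <hello-tag name="world"></hello-tag>
```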

Prisma aims to replace traditional ORMs with a GraphQL server for querying data. It automates many DB related operations and exposes a standard GraphQL endpoint, that can be queried directly or through other Graphql endpoints such as Apollo Server.

The most interesting aspect of Prisma is the ability to generate typed client code from existing databases including the models and relations, hence significantly speeding up development processes while keeping the code typed and safe.

Currently, Prisma has driver support for most major databases, including cloud DBs (e.g. DynamoDB), and most common programming languages (JavaScript, TypeScript, Golang, Python, and Java).

We strongly recommend trying it out as a new approach to ORM, as it has an interesting concept, an active community, and good documentation.

Background sync is a new web API that lets you defer actions until the user has stable connectivity. This is useful for ensuring that whatever the user wants to send, is actually sent.

We want to promote this API since it is another great way to support offline use and improve the UX. The API is pretty simple and gives great value.

Why? The Background Sync API is a great solution for connectivity issues and can contribute to the user experience.
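The API has two halves: the page registers a named sync, and the service worker handles it when connectivity returns. A sketch (the tag name and `sendQueuedMessages` helper are placeholders):

```javascript
// In the page: queue the data locally (e.g. in IndexedDB), then register a one-off sync
navigator.serviceWorker.ready
  .then((registration) => registration.sync.register('send-outbox'));

// In the service worker (separate file): the browser fires this event
// when the user has connectivity again, even if the page is closed
self.addEventListener('sync', (event) => {
  if (event.tag === 'send-outbox') {
    // placeholder: flush the queued messages from the local outbox
    event.waitUntil(sendQueuedMessages());
  }
});
```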

Svelte is a radical new approach to building user interfaces. Whereas traditional frameworks like React and Vue do the bulk of their work in the browser, Svelte shifts that work into a compile step that happens when you build your app.

Instead of using techniques like virtual DOM diffing, Svelte writes code that surgically updates the DOM when the state of your app changes.

We can try using Svelte to achieve performance benefits.

Stop

Apache NiFi is a software project from the Apache Software Foundation designed to automate the flow of data between software systems. It is based on the “NiagaraFiles” software previously developed by the NSA, which is also the source of a part of its present name – NiFi. It was open-sourced as a part of the NSA’s technology transfer program in 2014.

The software design is based on the flow-based programming model and offers features which prominently include the ability to operate within clusters, security using TLS encryption, extensibility (users can write their own software to extend its abilities) and improved usability features like a portal which can be used to view and modify behavior visually.

Flow-Oriented Programming is a concept in software development in which software engineers develop chunks of low-level business logic by writing code (such as Go, JavaScript, or Java), and a flow editor represents every chunk as a graphical block. This way, non-technical people in the company are able to create their own high-level business logic by using and connecting the low-level blocks.

Why should we stop doing this? NiFi never really succeeded; it is probably too complex a platform for a rather simple task.

Flume is a tool for collecting/streaming data to HDFS, used mostly for logs. We put it in Stop since other alternatives, such as Fluentd and Logstash, have caught on.

GitFlow is a branching model for Git, created by Vincent Driessen. Although it claims to be “…very well suited to collaboration and scaling the development team”, we have come to the conclusion that it is very hard and tedious to maintain, with many problems, most of them due to its high complexity. One of the major problems is that it requires many merges on many different levels. These merges are also typically huge, as most branches are long-lived and are not updated at regular intervals. Many merges are also performed not by the code owner but by others, which can lead to major problems.

Stop using this model and use an easier branching model. Everyone will benefit from this.

Groovy is an object oriented language which is based on the JVM. It is both a static and dynamic language. It can be used as both a programming language and a scripting language.

We recommend stopping the use of Groovy except in tools such as Jenkins and/or Gradle. The reasons are, for one, the loss of popularity of the language and its lack of community. It is also a much slower language than Java itself or other JVM-based languages. Furthermore, there are other, more popular JVM-based languages with much larger communities and much higher adoption rates; those communities are growing and those languages are evolving at a high pace. None of this is happening with Groovy.

As of now, Java 13 is out, and Java 8 is EOL unless you pay for LTS support. Staying with Java 8 means you are enlarging your technical debt, which will be much more difficult to overcome in the future.

We recommend getting off this version and onto the new Java release train: start using the newer versions of Java.

Mongoose is a MongoDB object modeling tool. Even though it is popular, data typing has many downsides on a NoSQL database, and recent conventions aim to drop all ODM/ORM tools when working with MongoDB. A good article: https://towardsdatascience.com/the-drastic-mistake-of-using-mongoose-to-handle-your-big-data-a3c408e21a4c

Why? A short list of reasons to stop using Mongoose:

  1. It is another layer to learn, with all its idiosyncrasies, special syntax, magical tags, and so on. This is mainly a disadvantage if you are already experienced with the database itself.
  2. Even if you are not, there is a vast bank of knowledge out there and many folks who can help with answers. Any single ODM is much more obscure knowledge not shared by many, and you will spend considerable amounts of time figuring out how to force-feed it things.
  3. Debugging query performance is challenging, because we are abstracted one level further from “the metal”. Sometimes quite a bit of tweaking is required to get the ODM to generate the right queries for you, and this is frustrating when you already know which queries you need.
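For comparison, the same data access through the native driver, with no schema layer in between (the connection string, database, and collection names are placeholders):

```javascript
const { MongoClient } = require('mongodb');

async function findRecentOpenOrders() {
  const client = await MongoClient.connect('mongodb://localhost:27017'); // placeholder
  try {
    const orders = client.db('shop').collection('orders');
    // Driver queries map one-to-one to what MongoDB executes:
    // no hidden casting, hooks, or generated queries to debug
    return await orders
      .find({ status: 'open' })
      .sort({ createdAt: -1 })
      .limit(10)
      .toArray();
  } finally {
    await client.close();
  }
}
```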

A monolithic software architecture means that all the business logic is written in one piece, in many cases in a single executable. This is the opposite of a Micro Services Architecture (MSA), where well-defined pieces of logic are separated into different, stand-alone services communicating via network techniques (HTTP requests, sockets, queues, etc.).

We should stop leading monolithic architectures because of several reasons:

  • MSA applications consist of different services, each one a significantly smaller, simpler codebase that is much easier to maintain.
  • In MSA, each service may serve many clients and can be reused in the future for different purposes.
  • In MSA, each service can be written in a different language, regardless of the technology of the other services.
  • In MSA, each service can be fully developed by a different team, in another place in the world. Only the interfaces matter.
  • In MSA, each service has its own life cycle and can be updated independently.

None of the above is possible in a monolithic application; therefore we should no longer lead with this approach.

Python is a powerful high-level, interpreted, open source, object-oriented programming language, portable, extensible and embeddable with simple syntax and large standard libraries to solve common tasks. Python is a general-purpose language. It has a wide range of applications from Web development (like Django and Bottle), scientific and mathematical computing (Orange, SymPy, NumPy) to desktop graphical user Interfaces (Pygame, Panda3D).

Why? Python 2.7 will not be maintained past 2020. Originally there was no official date; it has since been set to January 1, 2020. We suggest you stop using this version, especially for new projects.

It seems that Ruby, once very popular as a software development language in the DevOps movement, mainly through tools like Chef, Puppet, Logstash, and Fluentd, and through scripting and utilities around the Ruby and Ruby on Rails application lifecycle such as Capistrano, has taken a punch in favor of Python, Go, and JavaScript. With the rise in popularity of those languages and frameworks, we see less and less use of Ruby.

We assume there will always be something written in Ruby, but it will most definitely not be the language of choice when we are required to develop a utility or micro-app.

Scala combines object-oriented and functional programming in one concise, high-level language. Scala’s static types help avoid bugs in complex applications, and its JVM and JavaScript runtimes let you build high-performance systems with easy access to huge ecosystems of libraries.

Due to the over-academization of the language and its abundance of features, it has had low adoption.

Our current position is that Scala has moved into maintenance-only mode, and new projects should shy away from using it.

Weka is a Java-based machine learning library. It contains classifiers and regressors along with some feature-engineering tools.

Why? Java-based ML is used less and less throughout projects. In a microservices world, it is more efficient to perform ML in Python and leave the other tasks to Java, where it excels.

Keep

In the world of different programming paradigms (object-oriented, functional), the actor model does not get enough attention. The actor model is a conceptual model to deal with concurrent computation. It defines some general rules for how the system’s components should behave and interact with each other. The most famous language that uses this model is probably Erlang.

In cases where multithreading is needed, the actor model should be the default and standard solution.
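The core idea, private state plus a mailbox that processes one message at a time, can be sketched in a few lines of plain JavaScript (no actor library assumed):

```javascript
// A minimal actor: state is private and only ever touched by one message at a time
function createActor(initialState, handler) {
  let state = initialState;
  let mailbox = Promise.resolve(); // messages are processed strictly in order

  return {
    send(message) {
      mailbox = mailbox.then(async () => {
        state = await handler(state, message);
      });
      // resolves with the state after *this* message was handled
      return mailbox.then(() => state);
    },
  };
}

// Usage: a counter actor - concurrent sends cannot corrupt the state
const counter = createActor(0, (count, msg) =>
  msg.type === 'add' ? count + msg.value : count
);
```

Real actor systems (Erlang, Akka) add supervision, distribution, and location transparency on top of this same message-passing core.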

Airflow is today the most appreciated workflow engine in the distributed world. It allows building modular workflows at high scale, with triggering, forking, joining, and many more features. In addition, it has a good GUI that helps audit the workflows.

The framework runs on Python, which makes it very attractive to DevOps groups.

The ramp-up seems easy in the beginning but requires deep knowledge in later phases. The solution is also based on other Python frameworks and on databases for queue management and for the webserver: Celery, RabbitMQ, and MySQL or Postgres.

One drawback of this solution is that all configuration is deployed as Python code, so a single bug may halt the whole server. Another problem with Python is that different versions may be installed and updated on the different servers; if these are not managed properly, they cause bugs in Airflow itself.

Why? Very few systems have no need for scheduled tasks, and Airflow has become the main tool to solve this issue. Most cloud providers have a SaaS solution for Airflow, so we will see it becoming even more mainstream.

Akka is a free and open-source toolkit and runtime simplifying the construction of concurrent and distributed applications on the JVM. Akka supports multiple programming models for concurrency, but it emphasizes actor-based concurrency, with inspiration drawn from Erlang.

Why? Akka is about the only full actor-model implementation in the Java ecosystem. If you plan on creating a multithreaded application, Akka is a must-use framework.

Kinesis - Amazon Kinesis makes it easy to collect, process, and analyze real-time streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly, instead of having to wait until all your data is collected before processing can begin.

Why? Because sometimes you need to deploy a data bus for a data pipeline as a service, instead of installing a complex data-bus distribution.

Apache Beam is an open-source, unified model for defining both batch and streaming data-parallel processing pipelines. Beam supplies SDKs in different languages (Java, Python, Scala) to express the flow and processing of the data. The SDK then applies the runner according to the chosen platform (Spark, Flink, …). Beam aims to lead the industry on the proper way to deal with batch and streaming in the same pipeline. In addition, Beam addresses issues like processing time vs. event time and the handling of late data.

Why? Apache Beam is now a leading force in the whole streaming area. With its unified SDK, it is forcing competing frameworks to relook at what they have to offer. In addition, Google Dataflow's hosted Beam is a great SaaS service.

Apache Camel is an open-source integration framework based on known Enterprise Integration Patterns. It can also be described as a “mediation router” or a message-oriented middleware framework. Camel has adapters for many popular APIs, such as Apache Kafka, monitoring AWS EC2 instances, or even integrating with Salesforce. Integration routes are written as pipelines, which creates a totally transparent picture of the data flows.

Apache Camel is a versatile, age-proven tool that solves a complicated problem, and we recommend using it for its purpose.

The core of Apache Flink is a distributed streaming dataflow engine written in Java and Scala. Flink executes arbitrary dataflow programs in a data-parallel and pipelined manner. Flink’s pipelined runtime system enables the execution of bulk/batch and stream processing programs. Furthermore, Flink’s runtime supports the execution of iterative algorithms natively. Flink provides a high-throughput, low-latency streaming engine as well as support for event-time processing and state management. Flink applications are fault-tolerant in the event of machine failure and support exactly-once semantics.

Why? Streaming systems have become part of almost every project. For batch processing, Spark is a great tool, but as more systems move to streaming, Flink brings a better SDK model for event-by-event processing.

AWS was the first provider to offer functions as a service, back in November 2014. AWS initially released AWS Lambda as an event-driven provisioning/operations service, and it took just under 3 years to become the standard name among serverless and FaaS offerings.

AWS, like its competitors, offers Lambdas (a.k.a. functions) as a complement to its BaaS offerings, stitching together services such as:

  • Cognito
  • S3
  • CloudFormation
  • DynamoDB
  • RDS and many more

These integrations, alongside Lambda’s “infinite” scalability and its newly introduced (at the time) “price per 100ms”, made it very popular among both startups building their MVP and enterprises wishing to scale out or experiment with serverless and microservice architectures.

AWS Lambda provides many organizations the ability to write functions in a variety of programming languages and integrates well with many frameworks and other IaaS/PaaS/BaaS services.


Being a major part of “serverless” architecture, this should also be part of our Keep advice, as Lambda provides a great way to cost-effectively deploy new services on the cloud.

Google BigQuery is considered a Data Warehouse as a Service. Unlike traditional data warehouses based on an RDBMS, Google BigQuery is based on large read-only datasets, originally with no indexes.

Google took advantage of Dremel, its big-data solution, and implemented it as a service. Originally, the aim was to provide big-data analytics as a service with very high performance by leveraging the Google cloud. Today, Google brands it more as a Data Warehouse as a Service that can replace data warehouses managed on-premise.

The big benefit of BigQuery over an RDBMS is that you do not need to know the underlying architecture or employ a DBA in order to use it: just upload your data and start working. But the main attraction of BigQuery is that it is a big-data solution in the cloud, which means you can scale your data without needing to worry about performance.

Many companies have already adopted this DB as their data warehousing solution, and we suggest keeping it this way.

Celery is the most popular tool for (a)synchronous task execution in Python projects, as it adds multiprocessing capabilities to a Python application. It is typically used with a web framework such as Django, Flask, or Pyramid and adds great flexibility by way of multiple result backends, a nice config format, and workflow canvas support.

We recommend using this tool when you need to process distributed tasks in Python; it works well both locally and at large scale (using RabbitMQ as the broker).

Cloud Native Solution With the broad adoption of SOA and microservice architectures and the many IaaS and PaaS offerings, one of the lessons learned is that you need to be able to harness your infrastructure and start treating it in a standardized way.

So what is Cloud Native? The Cloud Native Computing Foundation (CNCF) describes it as “distributed systems capable of scaling to tens of thousands of self healing multi-tenant nodes”. That’s a “how” (distributed systems) and a “why” (high scalability and automated resilience).

All of the above, in a complex, polyglot world, will only be possible with standards and tooling that enable all these moving parts to suit small- to large-scale microservice-based applications. This is what the Cloud Native movement and the CNCF will promote in the years to come.

The following principles drive Cloud Native solutions:

  • Treat your own / cloud / hybrid infrastructure as-a-service: run on servers that can be flexibly provisioned on demand.

  • Microservices architecture: individual components are small, loosely coupled.

  • Automate deployments, continuous integration, and testing: replace manual tasks with scripts or code.

  • Containerize: package processes with their dependencies making them easy to test, move and deploy.

  • Orchestrate: use standard / commonly used / battle tested orchestration tools.

Cloud Service Providers As of today, cloud service providers play a major role in all startup companies and even in big companies. This brings programmers to the point where knowing Linux and ops utilities is not enough: developers are expected to be experts in the different cloud providers in order to succeed in their tasks.

Each cloud provider comes with new platforms, techniques, APIs, and databases.

CQRS is an alternative to the traditional multi-layer system design. It separates the behavior (write) model from the read model and forces the business logic to be developed decoupled from data provisioning.

The main benefits of CQRS:

  • separates scalability for reads and writes
  • simplifies concurrency and locking management
  • increases maintainability
  • fits event-driven and microservices architectures well

We advise using CQRS because this approach helps keep your project healthy in the long term and solves many scale and concurrency issues.
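A minimal sketch of the separation in plain JavaScript: commands mutate state by appending events, queries only read a separately maintained read model (all names are illustrative; in a real system the projection runs as a separate process):

```javascript
// Write side: commands validate and append events, they never return data
const events = [];
function handlePlaceOrder(command) {
  if (!command.items.length) throw new Error('empty order');
  events.push({ type: 'OrderPlaced', orderId: command.orderId, items: command.items });
  projectEvents(); // here done inline for brevity
}

// Projection: builds a denormalized read model from the event stream
const ordersById = new Map();
function projectEvents() {
  for (const e of events) {
    if (e.type === 'OrderPlaced') {
      ordersById.set(e.orderId, { orderId: e.orderId, itemCount: e.items.length });
    }
  }
}

// Read side: a plain query against the read model, shaped for the view that needs it
function getOrderSummary(orderId) {
  return ordersById.get(orderId);
}
```

Because reads and writes touch different models, each side can be stored, scaled, and optimized independently.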

Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised.

Deep learning architectures such as deep neural networks, deep belief networks, and recurrent neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection, and board game programs, where they have produced results comparable, and in some cases superior, to human experts. Many of today's companies, startups and large corporates alike, incorporate deep learning capabilities in their products. At Tikal, we should enhance our knowledge in deep learning to provide our customers with methods and tools to add deep learning capabilities to their portfolio.

In general, the DDD methodology focuses on 2 areas:

  1. Constructing a conceptual design that “describes” the business and product well, from a software point of view. It is made of strategic and tactical planning.
  2. Ideas for design and frameworks related to microservices; they are good to know but have not proven themselves in practice.

We should keep it, as it is another approach to OOD that can be friendly to people in the target domain (including non-technological people).

Functional programming is a programming paradigm that has become more popular in recent years; it has many benefits for event-driven architectures and other state/context handling and processing problems.

Why? Object-oriented programming has dominated the market for many years. As we move into the era of big data, we need code that can be distributed over multiple processors or even machines. For this, we need a paradigm that limits side effects, and this can be done beautifully with functional programming.
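The core property, pure functions with no side effects that compose and parallelize safely, shown in plain JavaScript:

```javascript
// Pure: same input always yields the same output, nothing outside is touched
const toCelsius = (fahrenheit) => (fahrenheit - 32) * 5 / 9;

// Because each element is independent, this map could run on any
// number of workers or machines without coordination
const readings = [32, 212, 98.6];
const celsius = readings.map(toCelsius);

// Composition: small pure functions build bigger ones
const average = (xs) => xs.reduce((sum, x) => sum + x, 0) / xs.length;
const averageCelsius = (xs) => average(xs.map(toCelsius));
```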

Consent The conditions for consent have been strengthened, and companies are no longer able to use long, illegible terms and conditions full of legalese. The request for consent must be given in an intelligible and easily accessible form, with the purpose of the data processing attached to that consent. Consent must be clear and distinguishable from other matters and provided in an intelligible and easily accessible form, using clear and plain language. It must be as easy to withdraw consent as it is to give it.

The Problem Certain technologies like blockchain and Kafka do not allow, or make it very difficult, to remove old data from the system. Therefore we must adopt different types of data architectures to address the GDPR issues, such as encryption with an option of deleting the decryption key.

Why? Due to GDPR legislation, proven methods of dealing with immutable data need to be taken into consideration from day one of every project.
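
A toy sketch of the "delete the decryption key" idea (often called crypto-shredding). The key store, names, and the one-time-pad XOR cipher are purely illustrative stand-ins; a real system would use a KMS/HSM and a proper cipher such as AES-GCM.

```python
import secrets

# Hypothetical in-memory key store; in production this would be a KMS/HSM.
key_store: dict[str, bytes] = {}

def encrypt_for_user(user_id: str, plaintext: bytes) -> bytes:
    # One-time-pad XOR as a stand-in for a real cipher (e.g. AES-GCM).
    key = secrets.token_bytes(len(plaintext))
    key_store[user_id] = key
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt_for_user(user_id: str, ciphertext: bytes) -> bytes:
    key = key_store[user_id]  # raises KeyError once the key is shredded
    return bytes(c ^ k for c, k in zip(ciphertext, key))

def forget_user(user_id: str) -> None:
    # GDPR "right to be forgotten": delete the key, and the immutable
    # ciphertext (in Kafka, a blockchain, ...) becomes unreadable noise.
    del key_store[user_id]

record = encrypt_for_user("user-42", b"alice@example.com")
assert decrypt_for_user("user-42", record) == b"alice@example.com"
forget_user("user-42")  # the record still sits in the log, but is now unreadable
```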

Robust, Age Proven, and Renewing The Java language, although it has been with us for a long time, is renewing itself. Starting from Java 9, we now see a new version of Java every 6 months. This means more features, much faster. I still believe that using Java gives a very strict and obvious structure to the program, which is sometimes lacking in other programming languages.

Framework Support The rise of reactive frameworks (Vert.x, Spring Reactor and others) and micro frameworks (Spark Java, JavaLite, light4j) makes Java very versatile in its uses and use cases. With the above-mentioned frameworks, we can now easily program reactive systems and microservices with a minimal footprint. Java always follows the trends (although sometimes a bit behind), as we can see with the recent Docker support in the JVM.

The Java community is huge and cannot be neglected.

We recommend using this language as it has one of the largest ecosystems of any language and a vibrant community, and it is actively maintained. It is very versatile, easy to use and learn, and fits many applications.

The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations, and narrative text. Uses include data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more. This tool serves as an IDE for the majority of data scientists. It is the inspiration for big web-based IDE projects such as Google DataLab and Apache Zeppelin. As part of the polyglot effort in the Backend team and our route towards Machine Learning, this tool should be on our radar for daily use.

Apache Kafka is a distributed streaming platform.

Kafka has three key capabilities:

  • Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
  • Store streams of records in a fault-tolerant durable way.
  • Process streams of records as they occur.

Kafka is generally used for two broad classes of applications:

  • Building real-time streaming data pipelines that reliably get data between systems or applications
  • Building real-time streaming applications that transform or react to the streams of data

A more comprehensive definition can be found here

Why? Kafka is a reliable tool to quickly ingest data and serve it onward; with the addition of tools such as KSQL and Kafka Streams, Kafka is becoming a more comprehensive data platform.

Kafka Streams is a client library for building applications and microservices, where the input and output data are stored in Kafka clusters. It combines the simplicity of writing and deploying standard Java and Scala applications on the client-side with the benefits of Kafka’s server-side cluster technology.

The library allows developing stateful stream-processing applications that are scalable, elastic, and fully fault-tolerant. The main API is a stream-processing DSL that offers high-level operators like filter, map, grouping, windowing, aggregation, joins, and the notion of tables.

It will be interesting to see how Kafka Streams competes with Spark and the other streaming frameworks.

Why? As Kafka has become more mainstream and fairly standard in a lot of systems, the streaming and aggregation aspects of the system should be part of the same system, and not on another layer such as Spark.

Kaggle is a platform for one of the largest communities of data scientists and machine learning engineers. On Kaggle you can find and investigate high-quality data sets, explore and build models with that data in a web-based data science environment, connect and learn with other data scientists and machine learning engineers, and participate in prize-awarding enterprise ML competitions.

Why? As Kaggle is a leading way to connect with the community, learn techniques, find data sets and models, and share our knowledge, we recommend continuing to use it as a source for all of these.

Keras is a high-level neural network API, built on top of the TensorFlow, Theano or CNTK engines. It aims at fast experimentation with various topologies of neural networks. As opposed to the complicated API of TensorFlow, Keras is easy to understand, allowing straightforward coding of neural network construction. These characteristics make Keras a first-choice library when building networks such as DNNs, CNNs, and RNNs.

Why? Keras is here to stay. This library, its simplicity, and its power will continue to be an important tool for data scientists. Its importance is embodied in the fact that TensorFlow incorporated it as a first-class API.

Kubernetes is an open-source system for automating deployment, scaling and management of containerized applications in a cluster.

Kubernetes uses a set of APIs and a YAML/JSON-based configuration language to define and maintain all aspects of containers running on a cluster, including networking, service discovery, proxying and load balancing, horizontal and vertical scaling, security and more. Kubernetes as a service is part of all major cloud providers' offerings, and there are projects that can deploy a Kubernetes cluster automatically on almost every computing environment.

Kubernetes introduces the concept of a pod - a set of one or more containers that are deployed as a single unit (same node, same namespace, and same network configuration). Pods can be thought of as lightweight servers constructed from container images. Pods can be deployed using controllers that define the behavior of the pod in the cluster. Commonly used controllers are the 'Deployment' controller, which defines a replica set to make sure a given number of pod instances is available at any moment in time, and the 'DaemonSet' controller, which deploys one pod per running worker node in the cluster.

Services, running as pods, can be exposed - internally to the cluster or externally to the world - via a 'Service' configuration object that acts as a reverse proxy and simple load balancer providing a single endpoint for the service. All configuration objects (pods, controllers, services, etc.) are loosely coupled via tags and selectors, which makes the infrastructure flexible and configurable.
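
A minimal sketch of this configuration language: a Deployment keeping three replicas of a pod alive, plus a Service coupled to those pods only through labels and selectors. All names and the image are illustrative.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # the replica set keeps 3 pod instances alive
  selector:
    matchLabels:
      app: web           # loose coupling via labels and selectors
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # illustrative image name
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web             # the Service proxies to any pod with this label
  ports:
  - port: 80
    targetPort: 8080
```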

Managed Services is a name for cloud-based systems that provide a service and are managed automatically by cloud providers. For instance, when a company needs a service such as a database, it can install and maintain it on its own, or buy a service where a cloud provider runs and maintains the database. Such an example is a self-maintained MySQL versus RDS by Amazon or Cloud SQL by Google.

Such managed services save a lot of time and human resources, both in the infrastructure initiation phase and during the maintenance period. They also provide a higher SLA, as managed services are usually backed by big, trained teams and a well-proven, highly maintained infrastructure. The cost of such services is relatively high compared to a self-maintained solution, but it has decreased over the past several years. Also, using such services reduces the cost of human resources for the organization, so it is eventually a cost-effective solution.

Development teams around the world are increasingly required to show experience in using such services. For instance, many job descriptions require specific cloud services experience, such as AWS- and GCP-based tools. Being a developer experienced with managed services increases the chance of getting hired these days.

In contrast to the old monolithic paradigm of software engineering, the Micro-Service Architecture (MSA) or Service-Oriented Architecture (SOA) orients software engineers to break their software design apart into different self-contained, independent services, maintained by small teams of no more than 4 engineers, each on a rather small codebase.

This new approach makes it possible to design logic parts of the entire application as more generic services, for the future use of other clients.

For more info and best practices about MSA design, refer here: https://12factor.net/

Why? MSA has long since become an industry standard for writing complex applications as a bunch of simple services. This simple, language-agnostic coding gives organizations more agility and the ability to quickly deliver changes.

As an asynchronous event-driven JavaScript runtime, Node.js is designed to build scalable network applications. This is in contrast to today’s more common concurrency model, in which OS threads are employed. Thread-based networking is relatively inefficient and very difficult to use. Furthermore, users of Node.js are free from worries of dead-locking the process, since there are no locks. Almost no function in Node.js directly performs I/O, so the process never blocks. Because nothing blocks, scalable systems are very reasonable to develop in Node.js.

We use Node.js in DevOps:

  • To write small tools, or lambda tasks in a serverless framework (gluing processes)
  • But I wouldn’t say it’s the main language in the “DevOps domain”

OpenJDK is an open-source implementation of the Java Standard Edition platform, with contributions from Oracle and the open Java community. It is distributed under the GPL license. As of Java 11, Oracle has changed its license to a proprietary one; usage of the Oracle JDK is no longer free. OracleJDK and OpenJDK are basically the same. More than that, both JDKs are built from the same source code and undergo the same rigorous tests and testing cycles. Previous commercial features which were solely in the Oracle JDK are now included in OpenJDK as well. So fundamentally both JDKs are the same - why pay Oracle for their JDK if we can have the same JDK for free?

We recommend using this distro from version 11 on as it is the free version of Java.

Pandas is a Python software library for analysis and manipulation of data. It provides a plethora of mathematical and analytical tools for data. Today, it is one of the most popular data manipulation libraries in the world. It goes hand-in-hand with the data science, AI and machine learning fields.

More and more companies and projects around the world are switching to Python to enjoy the benefits of this library. Along with NumPy, they are the leading software libraries in the data science world. Gaining knowledge and experience in Pandas opens great opportunities in the current software industry.
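
A small sketch of the typical workflow, on a made-up sales table: vectorized column arithmetic plus a split-apply-combine aggregation.

```python
import pandas as pd

# A tiny, made-up sales table to show typical manipulation steps.
df = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "units":  [10, 3, 7, 5],
    "price":  [2.0, 4.0, 2.0, 4.0],
})

df["revenue"] = df["units"] * df["price"]          # vectorized column math
by_region = df.groupby("region")["revenue"].sum()  # split-apply-combine

print(by_region["north"])  # 34.0  (10*2 + 7*2)
print(by_region["south"])  # 32.0  (3*4 + 5*4)
```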

Polyglot programming is the practice of writing code in multiple languages to capture additional functionality and efficiency not available in a single language. The use of domain-specific languages (DSLs) has become a standard practice for enterprise application development. For example, a mobile development team might employ Java, JavaScript and HTML5 to create a fully functional application. Other DSLs such as SQL (for data queries), XML (embedded configuration) and CSS (document formatting) are often built into enterprise applications as well. One developer may be proficient in multiple languages, or a team with varying language skills may work together to perform polyglot programming.

Why? With the architecture of micro-services and platforms such as GraalVM, it is becoming more and more rewarding to choose the right language for the right service.

Python is a powerful high-level, interpreted, open-source, object-oriented programming language, portable, extensible and embeddable with simple syntax and large standard libraries to solve common tasks. Python is a general-purpose language. It has a wide range of applications from Web development (like Django and Bottle), scientific and mathematical computing (Orange, SymPy, NumPy) to desktop graphical user Interfaces (Pygame, Panda3D).

This language is one of the most-used languages today and its benefits are well known, especially in the data engineering and AI industries. We suggest you keep using this language for its intended purposes.

Tornado is a Python web application framework and asynchronous networking library with a scalable, non-blocking web server.

Tornado is an open-source tool with 18.5K GitHub stars and 5.1K GitHub forks.

Tornado is noted for its high performance. Its design enables handling a large number of concurrent connections (i.e., tries to solve the “C10k problem”).

The documentation is clear and easy to read. Tornado has out-of-the-box WebSocket support, authentication (e.g. via Google), and security features (like cookie signing or XSRF protection). Another advantage of Tornado is its native support for social services.

Tornado is integrated with the standard library asyncio module and shares the same event loop (by default since Tornado 5.0). In general, libraries designed for use with asyncio can be mixed freely with Tornado.

We recommend Tornado because it is very good for long polling, WebSockets, and other applications that require a long-lived connection to each user. If you want to write something with Django or Flask but need better performance, you can opt for Tornado.
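
A minimal Tornado application, to give a feel for the framework; the handler, route, and port are illustrative. One process handles many concurrent connections on a single non-blocking event loop.

```python
import tornado.ioloop
import tornado.web

# A minimal Tornado application; handler and route names are illustrative.
class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, Tornado")

def make_app() -> tornado.web.Application:
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)                         # non-blocking: one process,
    tornado.ioloop.IOLoop.current().start()  # many concurrent connections
```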

Reactive Programming is a paradigm that changes the direction of the flow. Instead of creating threads and blocking for communication, we create pipelines through which data is passed and where each layer can react to the data received. This indirection changes the code flow from blocking to asynchronous.

Over time, the reactive extensions have become a standard for adding higher-level functionality on a stream of data while being agnostic to the data itself. The extensions are based on the observer pattern and allow for cancellation of in-stream data processing.
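
A toy sketch of the idea in Python: subscribers react to items pushed through a pipeline of data-agnostic operators, instead of pulling and blocking. Rx-style libraries provide the real, cancellation-aware versions of the map/filter operators shown here.

```python
from typing import Callable, Iterable

# A toy "observable": a pipeline of operators that each react to the
# data flowing through; agnostic to what the data actually is.
class Observable:
    def __init__(self, source: Iterable):
        self._source = source
        self._ops: list[Callable] = []

    def map(self, fn: Callable) -> "Observable":
        self._ops.append(lambda items: (fn(i) for i in items))
        return self

    def filter(self, pred: Callable) -> "Observable":
        self._ops.append(lambda items: (i for i in items if pred(i)))
        return self

    def subscribe(self, on_next: Callable) -> None:
        items = self._source
        for op in self._ops:
            items = op(items)
        for item in items:   # each layer reacts as data passes through
            on_next(item)

received = []
Observable(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0) \
    .subscribe(received.append)
print(received)  # [0, 4, 16, 36, 64]
```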

Why? Reactive programming has been around for a while but has only started to hit the mainstream. Most popular frameworks like vertx and spring have support for reactive programming. For those looking for performance, reactive programming is a must.

Scikit-learn is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy. It contains the data manipulation tools necessary for a machine learning project, and even developers using other Python-based libraries such as Keras or TensorFlow will probably find themselves using one of the tools provided by scikit-learn.

As we advance towards machine learning, scikit-learn is one of the basic tools to master.
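
The standard fit/predict workflow, sketched on the iris dataset bundled with the library; the estimator and parameters are one reasonable choice among many.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Standard fit/predict workflow on the bundled iris dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)
print(f"accuracy: {accuracy:.2f}")  # typically well above 0.9 on iris
```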

Serverless computing is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Serverless is a form of utility computing that provides complex infrastructure and security requirements as part of managed services. These services are managed by the provider, hence the name serverless (less management of servers). Serverless should not be confused with the Serverless framework …

Serverless framework Serverless.com is a toolkit for deploying and operating serverless architectures; it utilizes the different cloud providers’ functions and API gateway APIs to expose backend services.

Highlights:

  • Integrates with main cloud providers - AWS, GCP, Azure
  • Has Kubeless plugin
  • YAML definition of both the API gateway and lambdas
  • Utility to test functions locally using a local docker daemon

We should put it in “Keep”, as many customers work with this successful architecture trend, which provides a cost-effective architecture on both the technical and business levels.

Spark MLlib is a Machine Learning library that comes on top of Apache Spark. It is the best solution for applying machine learning algorithms to big amounts of data stored in distributed file systems, like HDFS, or in the cloud.

We promote Spark MLlib as a library for machine learning as it is part of the Spark ecosystem and is most suitable for big data.

Spring Cloud provides tools for developers to quickly build some of the common patterns in distributed systems. Coordination of distributed systems leads to boilerplate patterns, and using Spring Cloud, developers can quickly spin up services and applications that implement those patterns. They will work well in any distributed environment, including the developer’s own laptop, bare metal data centers, and managed platforms such as Cloud Foundry.

Why? Spring cloud allows the developers, especially those familiar with the Spring framework and lingo, to address in one place the needs and patterns raised by a distributed system.

Spring 5 introduces reactive programming into the framework, in the core and web libraries, also introducing WebFlux as a reactive alternative to WebMVC. This allows developing reactive, i.e. asynchronous and non-blocking, Spring applications. Mastering Spring 5 will:

  1. Allow adding reactive capabilities to existing Spring applications
  2. Help explore more reactive libraries and options when facing the need to choose a relevant framework.
  3. Allow backend developers to advance to working with Spring 5 and fully understand its capabilities, advantages, and disadvantages.

Why? Reactive programming is becoming more and more common; Spring 5 and WebFlux allow programmers familiar with Spring to develop reactive systems with the Spring framework.

Machine learning on mobile devices offers an array of new and unique user experiences for problems that are close to the data people are working on. Experiences that were impossible before, like OCR, image recognition, machine translation, and speech recognition, used to be in the realm of science fiction, but they are being brought into our daily lives. Thanks to machine learning on mobile and IoT devices, the universal translator from Star Trek, the Babel fish from The Hitchhiker’s Guide to the Galaxy, the “neural net processor, a learning computer” from Terminator 2, and more are not figments of writers’ imagination, but actual products that you can buy or that are in development.

TensorFlow is an open-source library for dataflow programming, often used in deep learning. It was developed for internal use by the Google Brain team and then released to the public; in 2017, a mobile/lite version was released targeting mobile and embedded/IoT platforms such as Android, iOS, and Raspberry Pi.

Why should we keep using it? On-device prediction is already incorporated in most mobile devices. Having a lite version of TensorFlow provides another important tool for such use cases.

Trunk Based Development is a branching model where developers collaborate in a single branch called ‘trunk’ (master in Git). No other long-lived development branch is created and used; developers create personal short-lived branches for their current work. Small, frequent and incremental merges into the trunk are the resulting and desired workflow. Thus merge hell is avoided, there are far fewer (if any) build breaks, more tests are run in CI, and overall everyone lives happily ever after.

Over time, for small and even for big teams, this model is easy to maintain: there is no confusion, and merges are always done only by the owner of the code. Personal responsibility is demanded of the team members.

This branching model is in Keep for its simplicity and ease of use.

TypeScript has become popular once again since the Angular team chose it as their main programming language. Many TS standards were embraced in ES2015-ES2018, and it has become heavily used in almost every JavaScript framework.

For larger projects, where you work as a team, we recommend preferring TypeScript on the backend over JavaScript. The reasons are:

  1. You allow the IDE to flag errors in the use of classes and functions that would otherwise only be perceived at runtime.
  2. When we define types, the IDE is able to relate objects and functions to the files where they originated.
  3. You catch errors before you even run your app.
  4. Studies suggest static typing could prevent around 15% of bugs.

TypeScript is already being used by many projects and customers as the implementation language on the backend, especially in the FaaS (i.e. AWS Lambda) world.

Vert.x is a toolkit to build distributed reactive systems on the top of the Java Virtual Machine using an asynchronous and non-blocking development model. As a toolkit, Vert.x can be used in many contexts: in a standalone application or embedded in a Spring application. Vert.x and its ecosystem are just jar files used like any other library: just place them in your classpath and you are done. However, as Vert.x is a toolkit, it does not provide an all-in-one solution but provides the building blocks to build your own solution.

We highly recommend this micro-framework as it is the fastest framework around with a very low footprint.

Apache Zeppelin is a notebook for managing and visualizing data and data lakes. With Apache Zeppelin, one can perform data ingestion, data discovery, and data analytics, and also have data visualization and collaboration. Apache Zeppelin runs on the JVM and supports the following languages and frameworks: Scala (for Apache Spark), Python, JDBC, Markdown and Shell. It also supports many plugins. The results of queries can be shown in nice built-in visuals like graphs, pie charts and more.

Apache Zeppelin supports many additional interpreters that can be loaded or integrated using Helium. Among those interpreters are, for example, Cassandra, Elasticsearch, Flink, Ignite, bean, angular and many more.

Why? Zeppelin is still the best tool out there to experiment in Spark, and as such it makes Spark accessible for Data Scientist (via PySpark).

Start

Apache Pulsar is an open-source distributed pub-sub messaging system originally created at Yahoo and now part of the Apache Software Foundation.

Why? With Kafka having gone mainstream, Apache Pulsar is a very good alternative that tries to take the concepts of Kafka to the next level. The product is already in production use, and therefore we should start using it.

Delta Lake is a storage layer that brings ACID transactions to Apache Spark and big data workloads. It allows storage options and operations that were missing since the start of the BigData era, mostly transaction-related options and storage updates, such as add, merge and delete.

The BigData world has sacrificed attributes such as ACID and updates for the sake of speed and volume. But with the growth of computing power, libraries, and techniques, those important attributes can be used again, along with the challenge of ever-growing data volumes. Delta Lake is an important endeavor, led by Databricks, to combine the important attributes of the traditional relational world with the power brought by big data and the parallel-computing developments of the past 10 years.

Chaos Engineering is becoming a discipline in designing distributed systems in order to address the uncertainty of distributed systems at scale.

Chaos Engineering can be thought of as the facilitation of experiments to uncover systemic weaknesses.

These experiments follow four steps:

  1. Start by defining ‘steady state’ as some measurable output of a system that indicates normal behavior.
  2. Hypothesize that this steady-state will continue in both the control group and the experimental group.
  3. Introduce variables that reflect real-world events like servers that crash, hard drives that malfunction, network connections that fail, etc.
  4. Try to disprove the hypothesis by looking for a difference in steady-state between the control group and the experimental group.
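
The four steps above can be sketched as a toy experiment; the "system" here (a pool of redundant servers) and the steady-state check are purely illustrative, whereas real chaos tools act on live infrastructure.

```python
# Toy chaos experiment: the "system" is a pool of redundant replicas and
# the steady state is "a request succeeds".
def request_ok(servers: list) -> bool:
    # A request succeeds while at least one replica is healthy.
    return any(servers)

servers = [True] * 3
assert request_ok(servers)       # 1+2: define steady state; hypothesize it holds

servers[0] = False               # 3: introduce a real-world event: crash a replica
assert request_ok(servers)       # 4: steady state survives -> more confidence

servers[1] = servers[2] = False  # a harsher experiment crashes everything...
assert not request_ok(servers)   # ...and uncovers a weakness to fix
print("experiment complete")
```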

In essence -> the harder it is to disrupt the steady state, the more confidence we have in the behavior of the system; and if a weakness is uncovered, we now have a target for improvement before that behavior manifests in the system at large.

Chaos Engineering got its name mainly through Netflix’s Chaos Monkey.

Read More @ https://principlesofchaos.org/

Cloud Native Solution With the broad adoption of SOA and Micro Service Architectures and many IaaS, PaaS offerings one of the “lessons learned” is that you need to be able to harness your infrastructure and start treating it in a standardized way.

So What is Cloud Native? The Cloud Native Computing Foundation (CNCF) describes it as “distributed systems capable of scaling to tens of thousands of self-healing multi-tenant nodes”. That’s a “how” (distributed systems) and a “why” (high scalability and automated resilience).

All the above in a complex/polyglot world will only be possible with standards and tools to enable all these moving parts to suit small to large scale microservice-based applications - This is what the Cloud Native Movement and the CNCF will promote in the years to come.

The following principles drive Cloud Native solutions:

  • Treat your own /cloud/hybrid infrastructure-as-a-service: run on servers that can be flexibly provisioned on demand.

  • Microservices architecture: individual components are small, loosely coupled.

  • Automate deployment, continuous integration and testing -> replace manual tasks with scripts or code.

  • Containerize: package processes with their dependencies making them easy to test, move and deploy.

  • Orchestrate: use standard / commonly used / battle tested orchestration tools.

Cloud Service Providers As of today, cloud service providers play a major role in all startup companies and even in big companies. This brings programmers to the point where knowing Linux and ops utilities is not enough. Developers are expected to be experts in the different cloud providers in order to succeed in their tasks.

Each cloud provider comes with new platforms, techniques, APIs and databases.

Docker has revolutionized the world. However, a business can stall development, and we believe this year we’ll see less usage of Docker and more of other alternatives:

  • Distroless Docker images (Google) - creates Docker images containing only the application (without the OS)
  • rkt (CoreOS) - a pod-based implementation targeting Kubernetes
  • Makisu (Uber) - creates Docker images without the need for a local daemon
  • OCI (Linux Foundation) - looks like the next standard; images can be written in several different formats, including Docker’s

Debezium streams changes from your databases. Debezium is an open-source distributed platform for change data capture. Start it up, point it at your databases, and your apps can start responding to all of the inserts, updates, and deletes that other apps commit to your databases. Debezium is durable and fast, so your apps can respond quickly and never miss an event, even when things go wrong.

Why? Debezium allows you to follow changes in your data, monitoring your data quickly and reacting to it. It is an open-source tool sponsored by Red Hat.

Go is a programming language introduced by Google in 2009. It is a compiled and strongly typed language similar to C, but with a much more intuitive syntax. Go is basically a functional language rather than a strict OOP one, designed for high performance (as it is compiled to native machine code), without the hassle of dealing with thread synchronization.

Our perspective is to define distinct aspects in which Go may give better performance than the ‘standard’ stack of Java / Python / NodeJS in backend development.

Why?

As MSA is becoming the standard for writing complex applications as a bunch of small and rather simple services, maintained by small teams of 3-4 people, well-known programming paradigms such as OOP and classic design patterns are no longer a must. The code has become much simpler, and a new programming language such as Go becomes more and more relevant: simple, supporting only the required features, with ONLY one way to implement a thing, and with the very high performance of a native application.

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

  • GraphQL queries always return predictable results.
  • While typical REST APIs require loading from multiple URLs, GraphQL APIs get all the data your app needs in a single request.
  • GraphQL uses types to ensure apps only ask for what’s possible, and provides clear and helpful errors.

We suggest starting to use this tool, where it applies, as an alternative to REST.

gRPC (Google Remote Procedure Call) is an open-source remote procedure call (RPC) system initially developed at Google. It uses HTTP/2 for transport and Protocol Buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, blocking or non-blocking bindings, and cancellation and timeouts. It generates cross-platform client and server bindings for many languages (C++, Java, Python, Node, etc.).

Many of our clients show interest and integrate gRPC in their systems. The shared language-neutral Protobuf definition allows them to create all code for all languages automatically and helps with the interoperability of different systems.

Flow-Based Programming (FBP) is a paradigm in computer engineering in which business logic is described as a network of “logical building blocks”. The main benefit of FBP is the fact that the business logic can be developed visually by personnel who are not engineers and are not aware of the specific technology in which these building blocks are internally written. Development is usually done through visual editors and tools.

NodeRed is an FBP development tool for both software and hardware applications. It provides a browser-based visual editor and a Node.js-based, event-driven, non-blocking framework to implement building blocks. Flows can be deployed directly from the editor in a single click.

As it has already been a while since writing complex applications as MSA became a standard in many organizations, R&D teams deal more and more with adapting existing systems to rapidly changing business logic. NodeRed gives non-technological staff the ability to define and update business logic on a live system with zero downtime, without involving R&D teams, leaving them enough time and capacity to handle platform and framework issues.

So what is a Microframework? A microframework is a minimalist web framework that is meant to be lightweight and fast, thus making the development of applications much easier and faster.

Here is a list of possible Microframeworks:

  • Sparkjava
  • Javalite
  • Dropwizard
  • Vert.x
  • light4j
  • Jooby

Why? With the micro-services architecture, building and running Java applications with a smaller footprint is essential for multi-instance deployment. In addition, these services no longer need the entire portfolio of features provided by a framework such as Spring or Jakarta (Java EE).

Coroutines are a Kotlin feature that converts async callbacks for long-running tasks, such as database or network access, into sequential code. The Kotlin team defines coroutines as “lightweight threads”: they are a sort of task that the actual threads can execute.

Why? If you use Kotlin, then you need to understand this threading model. It is simpler in terms of design and more efficient in terms of CPU and memory usage, and it is the standard threading model used in the language.
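
Kotlin syntax aside, the core idea - sequential-looking code that suspends instead of blocking a thread - can be sketched with Python's asyncio, used here purely as an analogy; the functions and data are made up.

```python
import asyncio

# Analogy only: Kotlin suspend functions play the role of these Python
# coroutines -- code that reads top-to-bottom like blocking code, but
# suspends instead of blocking while the (simulated) I/O is in flight.
async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(0.01)  # stands in for a network/DB call
    return {"id": user_id, "name": f"user-{user_id}"}

async def main() -> list:
    # Both fetches run concurrently on one thread; no callbacks needed.
    users = await asyncio.gather(fetch_user(1), fetch_user(2))
    return list(users)

users = asyncio.run(main())
print([u["id"] for u in users])  # [1, 2]
```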

KSQL is the streaming SQL engine for Apache Kafka®. It provides an easy-to-use yet powerful interactive SQL interface for stream processing on Kafka, without the need to write code in a programming language such as Java or Python. KSQL is scalable, elastic, fault-tolerant, and real-time. It supports a wide range of streaming operations, including data filtering, transformations, aggregations, joins, windowing, and sessionization.

We think we should start using it since it complements Kafka and Kafka Streams.
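As a hedged illustration of the interactive SQL interface, a stream definition and a windowed aggregation might look like this (topic and column names are hypothetical):

```sql
-- Define a stream over an existing Kafka topic (names are illustrative)
CREATE STREAM pageviews (user_id VARCHAR, page VARCHAR)
  WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

-- Continuous aggregation: views per user over a 1-minute tumbling window
SELECT user_id, COUNT(*)
  FROM pageviews
  WINDOW TUMBLING (SIZE 1 MINUTE)
  GROUP BY user_id;
```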

The Machine Learning Toolkit for Kubernetes helps leverage the power of Kubernetes and the benefits of deploying products within a microservices framework. It’s an open, community-driven project that makes it easier to manage a distributed machine learning deployment by placing pipeline components such as training, serving, monitoring, and logging into containers on the Kubernetes cluster. It’s supported by Google and used in the Google AI Platform.

We are recommending Kubeflow because it makes the ML system portable and scalable (using the benefits of Kubernetes). It’s quite mature and open source.

Maxwell’s daemon is an application that reads MySQL binary logs and writes row updates as JSON to Kafka, Kinesis, or other streaming platforms. Maxwell has low operational overhead, requiring nothing but MySQL and a place to write to. Its common use cases include ETL, cache building/expiring, metrics collection, search indexing, and inter-service communication. Maxwell gives you some of the benefits of event sourcing without having to re-architect your entire platform.
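For illustration, a row insert flows out of Maxwell as a JSON message shaped roughly like this (the values are hypothetical):

```json
{
  "database": "shop",
  "table": "orders",
  "type": "insert",
  "ts": 1566200000,
  "data": { "id": 42, "status": "paid" }
}
```

Downstream consumers on Kafka or Kinesis can then treat each such message as an event for caching, indexing, or ETL.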


Presto is a distributed SQL query engine. Its two killer features are:

  • The ability to join across a multitude of different DBs (e.g. join a MySQL table with a Hive table)
  • A distributed architecture for running queries, so it can query very large amounts of data without workarounds such as MapReduce or Spark jobs to run SQL queries

Why? Because it is really nice to be able to query multiple databases, and have the job run across several instances, with nothing more than a SQL query.

A saga is a sequence of local transactions where each transaction updates data within a single service. In a microservice architecture, each service in a saga performs its own transaction and publishes an event. The other services listen to that event and perform the next local transaction. If one transaction fails for some reason, the saga also executes compensating transactions to undo the impact of the preceding transactions. It is a very well-known pattern for solving the problem of data consistency across micro-services. The first paper about it was published back in 1987 and has been a popular solution since then.

Any solution which needs to guarantee data consistency across micro-services should start implementing this pattern as it solves many problems.

We think sagas are a preferable approach to implementing distributed transactions in a microservice architecture than two-phase commit. They are more robust and scalable, and they also work much better for long-running transactions.
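A minimal sketch of the compensation idea in Python, with hypothetical local-transaction steps standing in for real services:

```python
# Minimal saga sketch: run local transactions in order; on failure,
# run the compensations of the completed steps in reverse order.
def run_saga(steps):
    """steps: list of (transaction, compensation) callables."""
    done = []
    for tx, comp in steps:
        try:
            tx()
        except Exception:
            for undo in reversed(done):   # compensate in reverse order
                undo()
            return False
        done.append(comp)
    return True

log = []

def reserve_stock():
    log.append("reserve")

def release_stock():
    log.append("release")

def charge_card():  # hypothetical step that fails mid-saga
    raise RuntimeError("card declined")

ok = run_saga([(reserve_stock, release_stock),
               (charge_card, lambda: None)])
print(ok, log)  # False ['reserve', 'release']
```

In a real system each step is a service's local transaction and the coordination happens via published events rather than direct calls, but the undo-in-reverse logic is the same.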

With the need to give up-to-date results on large quantities of data, the community is moving from batch processing to stream processing. When designing such a system, the Lambda architecture needs to be revisited. Issues such as processing time vs. event time need to be addressed from the beginning as part of the overall architecture. In addition, how to deal with late data needs to be part of the architecture and not an afterthought.

Why? More and more companies are moving from batch processing to streaming, and the full impact of this architecture needs to be taken into account.
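A toy Python sketch (not a real streaming engine) of the event-time vs. processing-time distinction: events are bucketed by the timestamp they carry, not by when they arrive, and a late event is still credited to its original window if it arrives within an allowed lateness:

```python
from collections import defaultdict

WINDOW = 60              # 1-minute event-time windows
ALLOWED_LATENESS = 120   # accept events up to 2 minutes late

counts = defaultdict(int)
watermark = 0            # highest event time seen so far (simplistic watermark)

def on_event(event_time):
    """Bucket by event time; drop events that are too far behind the watermark."""
    global watermark
    watermark = max(watermark, event_time)
    if watermark - event_time > ALLOWED_LATENESS:
        return  # too late: dropped
    counts[event_time // WINDOW] += 1

# Events arrive out of order: 100 is late but within the allowed lateness,
# 5 arrives far too late and is dropped.
for t in [10, 70, 200, 100, 5]:
    on_event(t)

print(dict(counts))  # {0: 1, 1: 2, 3: 1}
```

Real engines (Flink, Beam, Kafka Streams) implement watermarks and lateness far more carefully, but this is the design question the paragraph above says must be answered up front.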

Try

AI managed services, or machine learning as a service (MLaaS), are cloud-based platforms that cover most infrastructure concerns such as data pre-processing, model training, model evaluation, and serving. The leading cloud MLaaS offerings come from Amazon, Microsoft Azure, Google Cloud AI, and IBM Watson.

We suggest you try AI managed services because they have all that you need to manage a machine learning project. They make it easy for machine learning developers, data scientists, and data engineers to take their ML projects from ideation to production.

ArchUnit is a free, simple and extensible library for checking the architecture of your Java code using any plain Java unit test framework. That is, ArchUnit can check dependencies between packages and classes, layers and slices, check for cyclic dependencies and more. It does so by analyzing given Java bytecode, importing all classes into a Java code structure.

Why? Because it’s a really nice addition to our unit tests when we want to enforce a certain architecture.

Despite the title, AutoML does not replace the data scientist (at least at this stage). It can rid you of manual work related to model selection and tuning, partially help with feature selection, and even automate whole-pipeline selection. But it cannot help with business understanding or indicate that a training dataset was biased. There are plenty of AutoML tools, both offline and cloud-platform based.

The two general use-cases we find AutoML beneficial for are:

  1. A data scientist can use it to automate daily tasks such as model selection and pipeline optimization
  2. Non-professionals can use it at the initial stage of a project to test ideas and decrease time to market

We believe AutoML is definitely worth a try because it makes data processing accessible to more developers. It’s simple, flexible, cost-effective, and uses fewer resources while upholding performance.
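A toy sketch of the loop AutoML automates: trying candidate models and keeping the best one by validation score. The “models” here are trivial functions so the example stays self-contained; real tools search vastly larger spaces of models and hyperparameters:

```python
# Toy model-selection loop: what AutoML automates at much larger scale.
val_x = [1, 2, 3, 4]
val_y = [2, 4, 6, 8]

candidates = {
    "identity": lambda x: x,
    "double":   lambda x: 2 * x,
    "square":   lambda x: x * x,
}

def score(model):
    # Mean squared error on the validation set (lower is better).
    return sum((model(x) - y) ** 2 for x, y in zip(val_x, val_y)) / len(val_x)

best = min(candidates, key=lambda name: score(candidates[name]))
print(best)  # double
```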

CockroachDB is a distributed SQL database developed by Cockroach Labs. It is of a new breed of databases, the so-called “NewSQL” type. CockroachDB is a transactional database engine on top of a high-performance distributed key-value store that fully supports ANSI SQL and ACID transactions. It can be deployed and used on-premise, in the cloud, or on top of a container/scheduler solution. The beauty and simplicity of geo-distribution of the data enables easy horizontal scaling as your needs grow.

We recommend trying this interesting new DB as it looks very promising in solving many clustering, replication and HA issues in a simple manner.

In Machine Learning methodologies, data scientists may repeat the process of building a model over many iterations, changing the datasets and the models that are generated. DVC (Data Version Control) is a framework to manage revisions of datasets and models. It uses Git to store the model and the dataset’s metadata and to manage revisions. At any point, you can go back to a previous revision of your ML process.

Why? Because in the Data Engineering world, it is important to be able to manage revisions of your dataset, in order to run cycles of Machine Learning that evolve through time.

The Eclipse MicroProfile is a set of specifications for Enterprise Java in a microservice architecture. By adhering to this spec, one can ensure application portability across multiple MicroProfile runtimes. It includes JAX-RS 2.0, CDI 1.1, JSON-P 1.0, Configuration 1.1, Fault Tolerance, JWT, Metrics, Health Check, and more.

We recommend trying this set of specifications, as it is minimalist in nature and designed for the microservice architecture that is so popular today. Many micro-frameworks have adopted the Eclipse MicroProfile as their governing spec.

In this new agile world, many people question the role of architecture. And certainly the pre-planned architecture vision couldn’t fit in with modern dynamism. But there is another approach to architecture, one that embraces change in the agile manner. In this view, architecture is a constant effort, one that works closely with programming so that it can react both to changing requirements and to feedback from programming. It is called “Evolutionary Architecture” to highlight that, while changes are unpredictable, architecture can still move in a better direction.

We put it on our Try category, as it brings new principles to the architecture world: “Bring the Pain Forward” and “Last Responsible Moment”

Generative adversarial networks (GANs) are a class of AI algorithms, used in unsupervised learning, implemented as two neural networks competing with each other in a zero-sum game: one trains to generate high-quality synthetic data, while the other trains to differentiate the real data from the fake, creating an iterative process of improvement. While in practice GANs are mostly used for data augmentation, which is important for supervised learning tasks, there are many more possible uses for GANs in other fields. Real-life applications of GANs include:

  • Fashion, art, and advertising
  • Video games
  • Science

One of the biggest problems in the field of data science is generating synthetic data. GANs are an interesting way to achieve this, and are thus gaining popularity.

GraalVM is a Polyglot universal virtual machine for running applications written in JavaScript, Python, Ruby, R, JVM-based languages like Java, Scala, Kotlin, Clojure, and LLVM-based languages such as C and C++. It removes the isolation between programming languages and enables interoperability in a shared runtime.

It enables you to write polyglot applications and also to compile your application into a native image. Native images compiled with GraalVM ahead-of-time improve the startup time and reduce the memory footprint of JVM-based applications.

GraalVM is based on the Graal JIT compiler, which, as of Java 11, can be used as an experimental JIT compiler for the JVM instead of the famous C2 JIT compiler, for those who enable this feature.

We highly recommend trying this incredible software, as it is already used in production environments such as Twitter’s, and it looks very promising for the near future and beyond.

Jaeger is an open-source, distributed tracing system which was developed by Uber.

In a microservice architecture, in an environment where we have many intertwined microservices working together, it’s almost impossible to map the inter-dependencies of these services and understand the execution of a request unless you use a distributed log tracing system. The two most popular and practical tracing tools are Zipkin and Jaeger.

Jaeger addresses the following problems:

  • Distributed context propagation
  • Distributed transaction monitoring
  • Root cause analysis
  • Service dependency analysis
  • Performance / latency optimization

We recommend trying this technology since Jaeger is supported by the Cloud Native Computing Foundation (CNCF) as an incubating project. Furthermore, its preferred deployment method is actually Kubernetes; if your deployment is based on Kubernetes, this tool will fit in nicely.

Kotlin is a general-purpose, open-source, statically typed “pragmatic” programming language for the JVM that combines object-oriented and functional programming features. It is focused on interoperability, safety, clarity, and tooling support. It is fully interoperable with Java code and libraries and as such, it is beginning to be the language of choice for many JVM based projects.

Although it is widely used for mobile development, its adoption as a backend language remains to be seen.

Kotlin is a great fit for developing server-side applications, allowing you to write concise and expressive code while maintaining full compatibility with existing Java-based technology stacks and a smooth learning curve. Just like any other language extensively used for building servers, Kotlin has its own set of frameworks designed to ease the implementation of servers:

Spring makes use of Kotlin’s language features to offer more concise APIs, starting with version 5.0. The online project generator allows you to quickly generate a new project in Kotlin.

Vert.x, a framework for building reactive Web applications on the JVM, offers dedicated support for Kotlin, including full documentation.

Ktor is a framework built by JetBrains for creating Web applications in Kotlin, making use of coroutines for high scalability and offering an easy-to-use and idiomatic API.

kotlinx.html is a DSL that can be used to build HTML in a Web application. It serves as an alternative to traditional templating systems such as JSP and FreeMarker.

We should try it because if you work with Kotlin on the backend there’s a good chance that you will want to leverage one of those.

Micronaut is a modern, JVM-based, full-stack framework for building modular, easily testable microservice and serverless applications. It provides better performance than traditional CDI frameworks by wiring the application at compile time.

Benchmarks show that Micronaut can deliver on a low footprint and better performance for micro-service applications, making it a very good candidate to replace the traditional frameworks for building micro-services.

MLflow is an open-source platform to manage the ML lifecycle, including experimentation, reproducibility, and deployment. Maintained and developed by Databricks (the company behind Apache Spark), it offers the following components:

  • MLFlow Tracking - ability to record and query ML experiments
  • MLFlow Projects - packaging format for reproducible runs on any platform.
  • MLFlow Models - General format for sending models to diverse deployment tools

Mastering this tool in Tikal provides us with production-level capabilities we can offer to customers.

We recommend trying this tool as it’s mature, lightweight, and easy to use.

NATS is an incredibly fast open-source cloud-native messaging system written in the Go programming language. The core design principles of NATS are performance, scalability, and ease of use. NATS supports the Pub/Sub and Scatter-Gather messaging models, as well as Request-Reply.

It seems that there is a need in the industry for even more variety than what we have today in the messaging-system domain. Kafka is very complex; there is a need for a simple solution that is highly scalable and lightning fast. The added value of the Request-Reply model is interesting, as this is something lacking in most other messaging systems, including popular solutions such as Kafka and RabbitMQ. This might open doors to as-yet-unavailable communication patterns between microservices.

This tool is certainly not new and also appears in the Cloud Native Computing Foundation landscape, but it has not yet conquered the masses and doesn’t have much traction yet; still, it looks very interesting and promising.
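The Request-Reply pattern can be sketched over a toy in-memory bus. This is not the real NATS client API, just an illustration of the mechanism: the requester publishes with a unique reply subject and waits for a message on it:

```python
import queue
import uuid

class Bus:
    """Toy pub/sub bus with NATS-style request-reply via a reply subject."""
    def __init__(self):
        self.subs = {}

    def subscribe(self, subject, handler):
        self.subs.setdefault(subject, []).append(handler)

    def publish(self, subject, data, reply=None):
        for handler in self.subs.get(subject, []):
            handler(data, reply)

    def request(self, subject, data, timeout=1.0):
        inbox = f"_INBOX.{uuid.uuid4()}"          # unique reply subject
        box = queue.Queue()
        self.subscribe(inbox, lambda d, _: box.put(d))
        self.publish(subject, data, reply=inbox)  # responder replies to inbox
        return box.get(timeout=timeout)

bus = Bus()
# A responder service: echoes the request uppercased to the reply subject.
bus.subscribe("greet", lambda d, reply: bus.publish(reply, d.upper()))
print(bus.request("greet", "hello"))  # HELLO
```

With plain pub/sub (as in Kafka or RabbitMQ without extra plumbing), the requester would have to build this inbox-and-correlation machinery itself; NATS offers it as a first-class operation.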

Prefect is an open-source automation and scheduling engine used for taking any code and transforming it into a robust, distributed workflow. It was developed to address some faults in the popular workflow engine Airflow, which restricts developers to its very strict vocabulary.

Prefect addresses many features that were missing in Airflow, like:

  • DAGs which need to be run off-schedule or with no schedule at all
  • DAGs that run concurrently with the same start time
  • DAGs with complicated branching logic
  • DAGs with many fast tasks
  • DAGs which rely on the exchange of data
  • Parametrized DAGs
  • Dynamic DAGs

We should try Prefect because, in order to run complex workflows in the Big Data world, there is a need for a concise framework that is highly available on one hand and easy to manage on the other.

Quarkus is an Open Source stack to write Java applications offering unparalleled startup time, memory footprint and developer experience. It offers familiar programming models and APIs (Hibernate, JAX-RS, Eclipse Vert.x, Apache Camel, Eclipse MicroProfile, Spring API compatibility and more).

It should be on our “Try”, as this framework brings JVM based languages back into microservices and cloud environments.

Reinforcement Learning (RL) is a machine learning method concerned with how software agents should take actions in an environment in order to maximize some notion of cumulative reward. Examples of real-world applications of RL include:

  • Automatic Resources management in computer clusters
  • Traffic Light Control
  • Robotics
  • Personalized Recommendations
  • Bidding and Advertising
  • Games

Reinforcement Learning is a different approach to achieving intelligent behavior in robots and agents, and it is gaining popularity among organizations around the world.
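A tiny tabular sketch of the reward-driven update at the heart of RL, on a hypothetical 2-armed bandit: the agent acts, observes a reward, and nudges its value estimate toward it, eventually preferring the better arm:

```python
import random

random.seed(0)

# Hypothetical 2-armed bandit: arm 1 pays more on average.
def pull(arm):
    return random.gauss(1.0 if arm == 0 else 2.0, 0.1)

q = [0.0, 0.0]        # estimated value per arm
alpha, eps = 0.1, 0.1 # learning rate and exploration rate

for _ in range(2000):
    # epsilon-greedy: explore sometimes, otherwise act on current estimates
    if random.random() < eps:
        arm = random.randrange(2)
    else:
        arm = max((0, 1), key=lambda a: q[a])
    reward = pull(arm)
    q[arm] += alpha * (reward - q[arm])   # reward-driven value update

print(max((0, 1), key=lambda a: q[a]))  # the agent learns to prefer arm 1
```

Full RL adds states and transitions (Q-learning, policy gradients, etc.), but the act/observe/update loop is the same.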

Performance: Rust is blazingly fast and memory-efficient: with no runtime or garbage collector, it can power performance-critical services, run on embedded devices, and easily integrate with other languages.

Reliability: Rust’s rich type system and ownership model guarantee memory safety and thread safety, and enable you to eliminate many classes of bugs at compile time.

Productivity: Rust has great documentation, a friendly compiler with useful error messages, and top-notch tooling: an integrated package manager and build tool, smart multi-editor support with auto-completion and type inspections, an auto-formatter, and more.

Why? Because of the confidence one gains when writing a program in it. Rust’s very strict and pedantic compiler checks every variable you use and every memory address you reference. It may seem that this would prevent you from writing effective and expressive code, but surprisingly enough, it’s very much the reverse: writing an effective and idiomatic Rust program is actually easier than writing a potentially dangerous one.

TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models but can be easily extended to serve other types of models and data.

This is different from the classic train/infer steps usually done in the backend, and we should “Try” this method.

Stop

Chef is a pull-based configuration management tool. The pull paradigm enables Chef to manage very large infrastructures from a single server that contains a repository of “recipes” and “cookbooks” written in a Ruby-based DSL. The shift to container-based deployments (and orchestrators such as Kubernetes) has reduced the need for server configuration management tools.

Jsonnet + Ksonnet

The Ksonnet project is no longer being developed. See here.

Bottom line: the Ksonnet project has stopped development and its Git repo is archived.

OpenStack is a free and open-source software platform for cloud computing, mostly deployed as infrastructure-as-a-service (IaaS), whereby virtual servers and other resources are made available to customers. The software platform consists of interrelated components that control diverse, multi-vendor hardware pools of processing, storage, and networking resources throughout a data center. Users either manage it through a web-based dashboard, through command-line tools or through RESTful web services.

On top of the OpenStack core services, which provide the basic cloud functionality of managing compute, storage, and networking resources, the OpenStack project contains a set of services (sometimes called “the Big Tent”) that provide a lot of complementary functionality, such as shared storage, databases, monitoring, orchestration, and much more.

Python is a powerful high-level, interpreted, open-source, object-oriented programming language: portable, extensible, and embeddable, with simple syntax and large standard libraries to solve common tasks. Python is a general-purpose language with a wide range of applications, from web development (Django, Bottle) and scientific and mathematical computing (Orange, SymPy, NumPy) to desktop graphical user interfaces (Pygame, Panda3D).

Why? Python 2.7 will not be maintained past 2020. Originally there was no official date; it has since been set to January 1, 2020. We suggest you stop using this version, especially for new projects.
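Two of the most common breakages when migrating off 2.7, print-as-function and true division, fit in one snippet:

```python
# Python 3: print is a function, and / is true division (use // for floor).
print(7 / 2)    # 3.5  (Python 2 would print 3)
print(7 // 2)   # 3
```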

It seems that the use of Ruby as a software development language, once very popular in the DevOps movement, mainly through tools like Chef, Puppet, Logstash, and Fluentd, and through scripting and utilities around the Ruby and Ruby on Rails application lifecycle such as Capistrano, has taken a punch in favor of Python, Go, and JavaScript. With the rise in popularity of these languages and frameworks, we see less and less use of Ruby.

We assume there will always be something written in Ruby, but it will most definitely not be the language of choice when we are required to develop a utility or micro-app.

Subversion, the good old successor of the mythological CVS, represents centralized source control tools in the OSS world. It shines in an environment of large repos and centralized management. But in the age of microservices on the one hand, and a good selection of GitHub-like solutions on the other, there is no point in continuing to use it.

Keep

AWS was the first provider to offer functions as a service, back in November 2014. AWS initially released AWS Lambda as an event-driven provisioning/operations service, and it took just under 3 years to become the standard name in serverless and FaaS offerings.

AWS, like its competitors, offers Lambdas (a.k.a. functions) as a complement to its BaaS offerings, stitching together services such as:

  • Cognito
  • S3
  • CloudFormation
  • DynamoDB
  • RDS and many more

These integrations, alongside Lambda’s “infinite” scalability and its newly introduced (at the time) “price per 100ms”, made it very popular among both startups building their MVP and enterprises wishing to scale out or experiment with serverless and microservice architectures.

AWS Lambda provides many organizations the ability to write functions in a variety of programming languages and integrates well with many frameworks and other IaaS/PaaS/BaaS services.

More about AWS Lambda here

Being a major part of “Serverless” architecture, this should also be part of our “Keep” advice, as Lambda provides a great way to cost-effectively deploy new services in the cloud.
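A Lambda in Python is just a handler function. The sketch below assumes an API Gateway-style proxy event; the `queryStringParameters`/`statusCode`/`body` field names follow AWS’s documented proxy-integration shape, while the handler logic itself is hypothetical:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler: echo a greeting for an API Gateway event."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Local invocation (context is unused here, so None is fine):
print(handler({"queryStringParameters": {"name": "radar"}}, None))
```

Deployed behind API Gateway, the same function would answer `GET /?name=radar` without any server to provision.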

Clair (French for clear, bright, transparent) performs static analysis of vulnerabilities in application containers: https://github.com/coreos/clair

This tool is mainly used to reveal vulnerabilities in containers. At regular intervals, Clair ingests vulnerability metadata from a configured set of sources and stores it in its database. Clients use the Clair API to index their container images; this creates a list of features present in the image and stores them in the database. Clients then use the Clair API to query the database for vulnerabilities of a particular image; correlating vulnerabilities and features is done per request, avoiding the need to rescan images. When updates to vulnerability metadata occur, a notification can be sent to alert systems that a change has occurred.

Clair is easy to integrate into a CI flow and is open source.


Google Cloud Functions enables serverless applications based on GCP IaaS offerings.

Cloud Functions lets application developers spin up code on demand in response to events originating from any API / HTTP request. Serverless architectures utilizing Google Functions integrated with Google Endpoints and BaaS services could build applications that scale from zero to infinity, on-demand - without provisioning or managing a single server.

Like other serverless and function providers, Google’s functions are best suited for backend services such as Firebase, Cloud Datastore, and the ML solutions also offered by GCP.

More info on google functions in the following link

Cloud Native Solution: With the broad adoption of SOA and microservice architectures and many IaaS and PaaS offerings, one of the “lessons learned” is that you need to be able to harness your infrastructure and start treating it in a standardized way.

So what is Cloud Native? The Cloud Native Computing Foundation (CNCF) describes it as “distributed systems capable of scaling to tens of thousands of self healing multi-tenant nodes”. That’s a “how” (distributed systems) and a “why” (high scalability and automated resilience).

All the above in a complex/polyglot world will only be possible with standards and tooling to enable all these moving parts to suit small to large scale microservice based applications - This is what the Cloud Native Movement and the CNCF will promote in the years to come.

The following principles drive Cloud Native solutions:

  • Treat your own / cloud / hybrid infrastructure as-a-service: run on servers that can be flexibly provisioned on demand.

  • Microservices architecture: individual components are small, loosely coupled.

  • Automate deployments, continuous integration, and testing: replace manual tasks with scripts or code.

  • Containerize: package processes with their dependencies making them easy to test, move and deploy.

  • Orchestrate: use standard / commonly used / battle tested orchestration tools.

Cloud Service Providers: As of today, cloud service providers play a major role in all startup companies and even in big companies. This brings programmers to the point where knowing Linux and ops utilities is not enough. Developers are expected to be experts in the different cloud providers in order to succeed at their tasks.

Each cloud provider comes with new platforms, techniques, APIs, and databases.

Template your app’s configuration with confd. confd is a lightweight configuration management tool focused on:

  • keeping local configuration files up-to-date using data stored in remote values providers and processing template resources.
  • reloading applications to pick up new config file changes

Purpose: detach environment-related configuration management from application code. This means that the app source repo contains configuration file templates that will be used to generate config files from external value providers at deployment time. confd supports the following value stores:

  • etcd
  • consul
  • dynamodb
  • redis
  • vault
  • zookeeper
  • aws ssm parameter store
  • system environment variables
  • yaml files

This tool can:

  1. Create environment-related config files at the deployment stage
  2. Watch for changes in the value store, update the configuration, and perform an action (reload the config, restart the app, etc.)

It’s written in Go and all its templating is based on Go templates.

It’s old and no new version was released last year, but it might still be relevant; just keep it in mind.
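A minimal confd template resource might look like this (the paths, key, and reload command are illustrative); the matching `myapp.conf.tmpl` would pull the value in with a Go-template call such as `{{getv "/myapp/database/url"}}`:

```toml
# /etc/confd/conf.d/myapp.toml - template resource (values are illustrative)
[template]
src = "myapp.conf.tmpl"
dest = "/etc/myapp/myapp.conf"
keys = ["/myapp/database/url"]
reload_cmd = "systemctl reload myapp"
```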

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Docker Compose is used for running multi-container applications on a single machine or on multiple machines with Swarm mostly in Dev and QA environments, and in some places in Production as well.
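A minimal, hypothetical docker-compose.yml wiring a web service to a database illustrates the YAML format:

```yaml
# docker-compose.yml: a hypothetical two-service application
version: "3"
services:
  web:
    build: .            # image built from the local Dockerfile
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: example
```

A single `docker-compose up` then creates and starts both services.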

Docker Swarm provides the cluster management and orchestration features embedded in the Docker Engine.

Continue Using it

  • We identify a large client base in the process of either living within the limits of Swarm or moving to Kubernetes
  • The Docker platform is getting support for Kubernetes. This means that developers and operators can build apps with Docker and seamlessly test and deploy them using both Docker Swarm and Kubernetes.
  • It looks like, in most cases, when you go to production with a container cluster it will be either Swarm or Kubernetes

“Everything as Code” is the practice of managing all the parts of the application in SCM and automating their execution, including configuration, infrastructure, and security. This practice gives us:

  • Traceability: all in SCM
  • Repeatability: we can recreate every part
  • Testability: we can test changes and infrastructure creation

As I see it, this practice is necessary for every product we develop these days.

Gradle is a build tool. Following a build-by-convention approach, Gradle allows for declaratively modeling your problem domain using a powerful and expressive domain-specific language (DSL) implemented in Groovy instead of XML. Because Gradle is JVM-native, it allows you to write custom logic in the language you’re most comfortable with, be it Java or Groovy. It also provides powerful dependency management.

Although many more developers still use Maven (~60%), Gradle is a powerful contender/ally that needs to be considered when deciding which build tool to use.

The Android Studio build system is based on Gradle, and the Android Gradle plugin adds several features that are specific to building Android apps.

Grafana’s motto is “The analytics platform for all your metrics”, and in the past ~5 years Grafana has done just that, becoming a centralized hub for data processing and visualization. Grafana supports a wide variety of data sources, enabling processing and real-time visualization of time-series data originating from many sources simultaneously: from traditional databases such as MySQL and PostgreSQL to time-series databases such as OpenTSDB, Graphite, Prometheus, Elasticsearch, and others.

Grafana also provides an extensible plugin interface, adding the ability to enrich visualizations with plugins or custom data sources.

The sugar coating of Grafana is the Grafana community website, which maintains a centralized hub for hosting plugins and dashboards that the community can download and manage via source control. In many cases, there is either a ready-made dashboard for your use case or at least a good starting point.

Grafana also supports an HA installation method, authentication [Basic/LDAP/OAuth/etc] & authorization schemes and organization management.

Read more

Apache Groovy is an object-oriented programming language for the Java platform. It is a dynamic language with features similar to those of Python, Ruby, Perl, and Smalltalk. It can be used as a scripting language for the Java Platform, is dynamically compiled to Java virtual machine (JVM) bytecode, and interoperates with other Java code and libraries. Groovy uses a Java-like curly-bracket syntax. Most Java code is also syntactically valid Groovy, although semantics may be different.

We use Groovy mostly for programming Jenkins-code (Jenkinsfile and Jenkins shared libraries) which are Groovy-based.

Helm manages Kubernetes applications through Helm Charts, which help you define, install, and upgrade even the most complex Kubernetes applications.

Charts are easy to create, version, share, and publish. There are Helm registry offerings from both Quay.io and Artifactory. Helm helps manage Services, Deployments, Secrets, and all the other parts of a Kubernetes-ready application. **Umbrella charts**: to install a complex application with Helm, there is the concept of an umbrella chart, a chart with dependencies (requirements.yaml) on other charts plus a values.yaml that overrides the sub-charts’ values. To install an umbrella chart, run “helm dep update” to download the dependent charts, then “helm install” or “helm upgrade” to install them.

The umbrella pattern allows you to deploy an assembly of multiple microservices as one version and use it as a kind of release to install in different datacenters.
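An umbrella chart’s requirements.yaml might look like this (the chart names and repository URL are illustrative):

```yaml
# requirements.yaml of a hypothetical umbrella chart
dependencies:
  - name: orders-service
    version: 1.2.0
    repository: https://charts.example.com
  - name: billing-service
    version: 0.9.1
    repository: https://charts.example.com
```

Pinning each sub-chart version here is what turns the umbrella chart into a single versioned release of the whole assembly.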

The NGINX controller is built around the Kubernetes Ingress resource that uses ConfigMaps to store the NGINX configuration.

NGINX is in the route path of many applications, which all vary in needs: from basic ones like routing (a.k.a. proxying) and authentication to advanced services like load balancing, SSL termination, and integrations with other projects like cert-manager (kube-lego) and many more.

NGINX started off as the faster Apache alternative, and it seems to be the most popular ingress controller among our clientele, gaining popularity due to its seamless integration with Kubernetes.

Keybase has become one of my central tools: it enables me to securely correspond with my peers, share files, and even share entire git repositories if needed. One main benefit of using Keybase is your own free hosted PGP server: all your public certs are stored on Keybase servers, which means using PGP encryption and sharing encrypted data (even outside of Keybase) has become a breeze. In addition, the Keybase CLI enables you to write clean code without worrying about storing sensitive data in it.

We are now checking how a CI bot user could become part of the Keybase network, which could potentially allow storing secure build artifacts that are discoverable only by users who follow that bot user (GPG-wise). With this CI capability, signing git commits and decrypting secrets at build time becomes much faster.

The alternative solutions are much more expensive to operate, and not only in money: think of managing servers for solutions such as Vault, which of course could provide this type of capability in addition to its other features. (I also found a reference by HashiCorp recommending the use of Keybase for the initial password/cert of Vault.)

Kubernetes is an open-source system for automating deployment, scaling and management of containerized applications in a cluster.

Kubernetes uses a set of APIs and a YAML/JSON-based configuration language to define and maintain all aspects of containers running on a cluster, including networking, service discovery, proxying and load balancing, horizontal and vertical scaling, security, and more. Kubernetes as a service is part of every major cloud provider’s offering, and there are projects that can deploy a Kubernetes cluster automatically on almost any computing environment.

Kubernetes introduces the concept of a POD - a set of one or more containers that are deployed as a single unit (same node, same namespace, and same network configuration). PODs can be thought of as lightweight servers constructed from container images. PODs can be deployed using controllers that define the behavior of the POD in the cluster. Commonly used controllers are the ‘Deployment’ controller, which defines a replica-set to make sure a given number of POD instances is available at any moment in time, and the ‘DaemonSet’ controller, which deploys one POD per running worker node in the cluster.

Services running as PODs can be exposed, internally to the cluster or externally to the world, via a ‘Service’ configuration object that acts as a reverse proxy and simple load balancer, providing a single endpoint for the service. All configuration objects (PODs, controllers, Services, etc.) are loosely coupled via labels and selectors, which makes the infrastructure flexible and configurable.
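To make the label/selector coupling concrete, here is a minimal Deployment plus Service pair sketched as Python dicts (the field names follow the Kubernetes API; the app name and image are invented for illustration):

```python
# A minimal Kubernetes Deployment manifest, expressed as a Python dict.
# The 'app: web' label is what ties the controller, its PODs, and a
# Service together: the selector must match the POD template's labels.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # the replica-set keeps 3 POD instances alive
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "nginx:1.25",
                     "ports": [{"containerPort": 80}]}
                ]
            },
        },
    },
}

# A Service selects the same label to load-balance across those PODs.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {"selector": {"app": "web"}, "ports": [{"port": 80}]},
}

def selector_matches(selector: dict, labels: dict) -> bool:
    """True if every selector key/value appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

# The loose coupling: both objects point at the POD labels, not at each other.
assert selector_matches(deployment["spec"]["selector"]["matchLabels"],
                        deployment["spec"]["template"]["metadata"]["labels"])
assert selector_matches(service["spec"]["selector"],
                        deployment["spec"]["template"]["metadata"]["labels"])
```

The same dicts, serialized to YAML, are what you would feed to `kubectl apply`.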

The Operator pattern: a Kubernetes Operator is simply a domain-specific controller that can manage, configure, and automate the lifecycle of stateful applications. Managing stateful applications, such as databases, caches, and monitoring systems, running on Kubernetes is notoriously difficult. By leveraging the power of the Kubernetes API we can now build self-managing, self-driving infrastructure by encoding operational knowledge and best practices directly into code. For instance, if a MySQL instance dies, we can use an Operator to react and take the appropriate action to bring the system back online.
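As a rough illustration of the idea (not a real Operator, which would watch the Kubernetes API), the reconcile loop at the heart of an Operator can be sketched like this, with a plain dict standing in for the cluster:

```python
# A toy sketch of an Operator's reconcile loop. The "cluster" here is just
# an in-memory dict standing in for the API server; the MySQL example and
# pod names are invented for illustration.
desired_state = {"mysql-replicas": 3}

def observe(cluster: dict) -> dict:
    """Read the current (actual) state."""
    return {"mysql-replicas": len(cluster.get("mysql-pods", []))}

def reconcile(cluster: dict) -> None:
    """Drive actual state toward desired state (the encoded operational knowledge)."""
    actual = observe(cluster)
    diff = desired_state["mysql-replicas"] - actual["mysql-replicas"]
    pods = cluster.setdefault("mysql-pods", [])
    for _ in range(diff):        # an instance died -> bring it back
        pods.append(f"mysql-{len(pods)}")
    for _ in range(-diff):       # too many -> scale down
        pods.pop()

cluster = {"mysql-pods": ["mysql-0", "mysql-1", "mysql-2"]}
cluster["mysql-pods"].pop()      # simulate a MySQL instance dying
reconcile(cluster)               # the Operator reacts and restores it
assert len(cluster["mysql-pods"]) == 3
```

A real Operator runs this loop continuously against CRD objects instead of being called by hand.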

As an asynchronous event-driven JavaScript runtime, Node.js is designed to build scalable network applications. This is in contrast to today’s more common concurrency model, in which OS threads are employed. Thread-based networking is relatively inefficient and very difficult to use. Furthermore, users of Node.js are free from worries of dead-locking the process, since there are no locks. Almost no function in Node.js directly performs I/O, so the process never blocks. Because nothing blocks, scalable systems are very reasonable to develop in Node.js.

We use Node.js in DevOps:

  • To write small tools, or for serverless-framework/Lambda tasks (gluing processes)
  • But we wouldn’t say it’s the main language in the “DevOps domain”

Polyglot programming is the practice of writing code in multiple languages to capture additional functionality and efficiency not available in a single language. The use of domain-specific languages (DSLs) has become a standard practice for enterprise application development. For example, a mobile development team might employ Java, JavaScript and HTML5 to create a fully functional application. Other DSLs such as SQL (for data queries), XML (embedded configuration) and CSS (document formatting) are often built into enterprise applications as well. One developer may be proficient in multiple languages, or a team with varying language skills may work together to perform polyglot programming.

Why? With the architecture of micro-services and platforms such as GraalVM, it is becoming more and more rewarding to choose the right language for the right service.

Prometheus is an open-source system monitoring and alerting toolkit originally built at SoundCloud. Prometheus uses a pull model, scraping endpoints that expose metric data over HTTP(S), either via standardized exporters such as the JMX, MySQL, cAdvisor, and node_exporter ones, or via Prometheus client library implementations available for common frameworks and languages.
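In practice you would use an official client library, but the text exposition format a scrape target returns on its /metrics endpoint is simple enough to sketch by hand; the metric name and labels below are invented for illustration:

```python
# A sketch of the Prometheus text exposition format that exporters and
# client libraries serve for the Prometheus server to scrape.
def render_metrics(counters: dict) -> str:
    """Render {(method, code): count} as Prometheus text format."""
    lines = [
        "# HELP http_requests_total Total HTTP requests.",
        "# TYPE http_requests_total counter",
    ]
    for (method, code), value in sorted(counters.items()):
        lines.append(
            f'http_requests_total{{method="{method}",code="{code}"}} {value}'
        )
    return "\n".join(lines) + "\n"

counters = {("GET", "200"): 1027, ("POST", "500"): 3}
print(render_metrics(counters))
```

The server scrapes this text on an interval, which is why short-lived jobs need the push gateway mentioned below.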

Prometheus consists of a centralized server (written in go) which implements:

  1. A time series database,
  2. A target discovery mechanism based on common service discovery providers, ranging from the major cloud provider APIs to service discovery services such as etcd/Consul and, of course, Docker and Kubernetes
  3. A web interface that also provides the PromQL interface for the time series data queries.

Additional (Optional) components:

  1. push gateway - a server which provides monitoring capabilities for short-term/stateless services to push metrics (over the default pull method)
  2. node-exporter - the official os level metric exporter
  3. JMX exporter - java application monitoring + common JVM related metrics
  4. Client libraries for all common languages nodejs, go, java, python …
  5. many many more

Prometheus is part of the CNCF and the most common monitoring framework in the Kubernetes world.

Python is a powerful high-level, interpreted, open-source, object-oriented programming language; it is portable, extensible, and embeddable, with simple syntax and large standard libraries to solve common tasks. Python is a general-purpose language. It has a wide range of applications, from web development (like Django and Bottle) and scientific and mathematical computing (Orange, SymPy, NumPy) to desktop graphical user interfaces (Pygame, Panda3D).

This language is one of the most-used languages today and its benefits are well known, especially in the data engineering and AI industries. We suggest you keep using this language for its intended purposes.

Jenkins is widely recognized as the de-facto standard solution for implementing CI and even CD, but there are other leading alternatives:

  • Travis - the GitHub-hosted CI
  • CircleCI - a free cloud-based system (it also has an on-premise option)
  • Gitlab-CI - part of the GitLab platform
  • TeamCity - JetBrains’ CI/CD server

SDS - software-defined storage / persistent workloads. With the growing usage of containers and container orchestration platforms such as Kubernetes, it’s a good time to rethink storage completely. When we started with Docker and containers, persistence and storage were out of scope. As applications now consist of volatile sets of loosely coupled microservices running in containers, ever-changing in scale, constantly being updated and re-deployed, and always evolving, the traditional mindset of serving storage and data services has changed.

Kubernetes paved the way to enable these types of applications by making it inexpensive and manageable to run the hundreds of container instances that make up an application. Likewise, software-defined storage (SDS) made running dozens of systems that make up a highly elastic storage system serving hundreds of volumes/LUNs to many clients viable and manageable. Now is a perfect time to combine these two systems using Persistent Volumes and Claims and of course the rise of Operators such as Rook which introduce Software Defined Storage in the form of an Operator.

With the growing list of vendors providing “vault”-like services, from HashiCorp’s Atlas offering to AWS’s Secrets Manager to GKE’s Cloud KMS, and with projects like Vault, which has an Operator that provides “vaults” to deployments, this trend is a direct continuation of how the security / “no passwords in Git” trend keeps strengthening.

Serverless computing is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Serverless is a form of utility computing that provides complex infrastructure and security requirements as part of managed services; these services are managed by the provider, hence the name serverless (less management of servers). Serverless should not be confused with the Serverless Framework.

The Serverless Framework (Serverless.com) is a toolkit for deploying and operating serverless architectures; it utilizes the different cloud providers’ functions and API gateway APIs to expose backend services.

Highlights:

  • Integrates with main cloud providers - AWS, GCP, Azure
  • Has Kubeless plugin
  • Yaml definition of both API gateway and lambdas
  • Utility to test functions locally using a local docker daemon

We should put it on “Keep”, as many customers work with this successful architecture trend, which provides cost-effective architecture on both the technical and business levels.

Spinnaker is an open-source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.

Spinnaker has been battle-tested in production by hundreds of teams over millions of deployments. It combines a powerful and flexible pipeline management system with integrations to the major cloud providers.

It can be deployed across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, and Openstack, with Oracle Bare Metal and DC/OS coming soon.

Terraform is a provisioning tool (as opposed to a configuration management tool): it works with immutable infrastructure, uses a declarative language, and is masterless and agentless.

Terraform is a popular tool from a well-established vendor, HashiCorp, so we should certainly keep offering it as an alternative to our customers.

Terraform has recently announced support for Kubernetes, which goes to show that HashiCorp is on its feet and adapting to changes.

TICK (Telegraf, InfluxDB, Chronograf, Kapacitor) is a modern time series platform, designed from the ground up to handle metrics and events. InfluxData’s products are based on an open-source core consisting of the projects Telegraf, InfluxDB, Chronograf, and Kapacitor, collectively called the TICK Stack.

Telegraf is a plugin-driven server agent for collecting and reporting metrics.

  • Telegraf has plugins or integrations to source a variety of metrics directly from the system it’s running on, to pull metrics from third party APIs, or even to listen for metrics via a StatsD and Kafka consumer services. It also has output plugins to send metrics to a variety of other datastores, services, and message queues, including InfluxDB, Graphite, OpenTSDB, Datadog, Librato, Kafka, MQTT, NSQ, and many others.

InfluxDB is a time series database built from the ground up to handle high write and query loads.

  • InfluxDB is a custom high-performance datastore written specifically for timestamped data, including DevOps monitoring, application metrics, IoT sensor data, and real-time analytics. Conserve space on your machine by configuring InfluxDB to keep data for a defined length of time, automatically expiring and deleting any unwanted data from the system. InfluxDB also offers an SQL-like query language for interacting with data.

Chronograf is the administrative user interface and visualization engine of the platform.

  • It makes the monitoring and alerting for your infrastructure easy to setup and maintain. It is simple to use and includes templates and libraries to allow you to rapidly build dashboards with real-time visualizations of your data and to easily create alerting and automation rules.

Kapacitor is a native data processing engine. It can process both stream and batch data from InfluxDB.

  • Kapacitor lets you plug in your own custom logic or user-defined functions to process alerts with dynamic thresholds, match metrics for patterns, compute statistical anomalies and perform specific actions based on these alerts like dynamic load rebalancing. Kapacitor integrates with HipChat, OpsGenie, Alerta, Sensu, PagerDuty, Slack, and more.

The TICK Stack is definitely a good monitoring tool, and it has an enterprise offering as well.

A Time Series Database (TSDB) is a database optimized for time-stamped or time series data. Time series are simply measurements or events that are tracked, monitored, downsampled, and aggregated over time. This could be server metrics, application performance monitoring, network data, sensor data, events, clicks, trades in a market, and many other types of analytics data. The key difference between time series data and regular data is that you’re always asking questions about it over time.
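The downsampling and aggregation mentioned above can be sketched in a few lines; the sample values and the 60-second window are arbitrary:

```python
# Downsampling a raw time series into fixed windows -- the kind of
# aggregation a TSDB performs continuously. Timestamps are in seconds.
from statistics import mean

samples = [(0, 10.0), (15, 12.0), (30, 11.0), (75, 20.0), (90, 22.0)]

def downsample(points, window=60):
    """Average raw (timestamp, value) points into `window`-second buckets."""
    buckets = {}
    for ts, value in points:
        buckets.setdefault(ts // window * window, []).append(value)
    return {start: mean(vals) for start, vals in sorted(buckets.items())}

print(downsample(samples))   # {0: 11.0, 60: 21.0}
```

Real TSDBs do this with retention policies and continuous queries rather than ad-hoc code, but the shape of the computation is the same.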

There are quite a few popular open-source time series databases which are widely used; some of them are listed below:

  1. DalmatinerDB
  2. InfluxDB
  3. Prometheus
  4. Riak TS
  5. OpenTSDB
  6. KairosDB
  7. Elasticsearch
  8. Druid
  9. Blueflood
  10. Graphite (Whisper)

A great source to read here

HashiCorp Vault is one of the tools that are definitely a keep from the “Secrets-as-a-Service” trend, which varies from cloud platforms like AWS/GCP to self-hosted secrets/token vaults.

Apache Zeppelin is a notebook for managing and visualizing data and data lakes. With Apache Zeppelin, one can perform data ingestion, data discovery, and data analytics, and also have data visualization and collaboration. Apache Zeppelin runs on the JVM and supports the following languages and frameworks: Scala (as Apache Spark), Python, JDBC, Markdown, and shell. It also supports many plugins. The results of the queries can be shown in nice built-in visuals like graphs, pie charts, and more.

Apache Zeppelin supports many additional interpreters that can be loaded or integrated using Helium. Among those interpreters are, for example, Cassandra, Elasticsearch, Flink, Ignite, bean, angular, and many more.

Why? Zeppelin is still the best tool out there to experiment in Spark, and as such it makes Spark accessible for Data Scientist (via PySpark).

Start

Archery is an opensource vulnerability assessment and management tool which helps developers and pen-testers to perform scans and manage vulnerabilities.

Although security scanners, linters, and other quality assurance techniques have been automated throughout the years, we do see a paradigm shift in terms of what may commonly be called the “Security First approach” (Salo Shp), which indicates how security has climbed the ladder from the least important (or most tedious) task to the most important one. And with the “mass production” of microservices, we will most likely see more and more solutions that enable automation and streamlining of security processes and regulation down the production pipeline.

Archery uses popular opensource tools to perform comprehensive scanning for web applications and networks. It also performs web application dynamic authenticated scanning and covers the whole application by using selenium.

These types of tooling are a great complementary solution to many other security disciplines such as IAM or secret & token management, key rotations & policies, mTls, identity management and many other security-related concerns which nowadays have become more a necessity than a luxury.

GitOps is a way to do Kubernetes cluster management and application delivery. It works by using Git as a single source of truth for declarative infrastructure and applications. With Git at the center of your delivery pipelines, developers can make pull requests to accelerate and simplify application deployments and operations tasks to Kubernetes.

GitOps is a good pattern to manage deployments and is also adopted by Argo CD.
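The core of the pattern, comparing what Git declares against what the cluster actually runs, can be sketched like this (dicts stand in for the Git repo and the live cluster; the hashing detail is illustrative, not Argo CD's actual mechanism):

```python
# A sketch of the GitOps sync check: Git holds the declared (desired)
# manifests, and an agent compares them against what the cluster runs.
import hashlib
import json

def digest(manifest: dict) -> str:
    """Stable fingerprint of a manifest, for cheap comparison."""
    return hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()
    ).hexdigest()

def sync_status(git_manifests: dict, live_manifests: dict) -> dict:
    """Per-app status: 'Synced' when live matches Git, else 'OutOfSync'."""
    return {
        app: "Synced"
        if digest(m) == digest(live_manifests.get(app, {}))
        else "OutOfSync"
        for app, m in git_manifests.items()
    }

git = {"web": {"image": "web:v2", "replicas": 3}}
live = {"web": {"image": "web:v1", "replicas": 3}}  # someone deployed manually
print(sync_status(git, live))   # {'web': 'OutOfSync'}
```

An out-of-sync app is then re-applied from Git, which is why every change must go through a pull request rather than `kubectl edit`.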

Argo CD - declarative GitOps CD for Kubernetes

This tool simplifies app deployment on K8S clusters. Its installation is easy and based on CRDs, and it has a pretty fast learning curve before using it in PROD. Argo CD lives in your K8S cluster and can manage app deployments on local and remote clusters with no additional tools to be installed.

How it works: once installed, it tracks a Git repo with app deployment packages/manifests. It supports:

  • Kustomize
  • helm
  • Ksonnet
  • Jsonnet

It detects whether the local/remote cluster manifests are in sync with the Git repo (by polling the repo); in case there is a diff (image version, replica set changed, annotation added, etc.) it will deploy the changes to the cluster.

In addition, it validates whether the entire app is healthy or not. In case someone somehow changed the app’s configuration on the cluster, the tool will restore its original state according to the Git repo. Thus, any change in app configuration, version, or setup should be managed in Git.

What else?

  1. Great documentation with examples
  2. Monitoring metrics exposed in Prometheus format
  3. Roles and Policies management - in addition to 2 built-in roles (admin and read-only) there is an option to create your own!
  4. SSO integration using DEX (OpenID, Google, GitHub, Okta, etc.)
  5. Built-in backup/restore tooling.

In addition to the CLI tool, it has a very nice web UI which allows the same management possibilities.

Chaos Engineering is becoming a discipline for designing distributed systems, addressing the uncertainty of such systems at scale.

Chaos Engineering can be thought of as the facilitation of experiments to uncover systemic weaknesses.

These experiments follow four steps:

  1. Start by defining ‘steady state’ as some measurable output of a system that indicates normal behavior.
  2. Hypothesize that this steady-state will continue in both the control group and the experimental group.
  3. Introduce variables that reflect real-world events like servers that crash, hard drives that malfunction, network connections that fail, etc.
  4. Try to disprove the hypothesis by looking for a difference in steady-state between the control group and the experimental group.

In essence: the harder it is to disrupt the steady state, the more confidence we have in the behavior of the system. And if a weakness is uncovered, we now have a target for improvement before that behavior manifests in the system at large.
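The four steps above can be sketched on a toy metric (all numbers and the 99% threshold are invented for illustration):

```python
# A sketch of the four chaos-engineering steps on a toy success-rate metric.
from statistics import mean

def steady_state(success_rates) -> bool:
    """Step 1: define 'steady state' as a measurable output,
    e.g. a mean request success rate of at least 99%."""
    return mean(success_rates) >= 0.99

control      = [0.995, 0.997, 0.996, 0.998]  # untouched group
experimental = [0.995, 0.990, 0.992, 0.994]  # step 3: crash a server,
                                             # fail a network link, ...

# Step 2: hypothesize steady state continues in BOTH groups.
# Step 4: try to disprove it by comparing the two groups.
hypothesis_holds = steady_state(control) and steady_state(experimental)
print("confidence gained" if hypothesis_holds
      else "weakness found: a target for improvement")
```

Tools like Chaos Monkey automate step 3 in production, which is what makes the comparison meaningful at scale.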

Chaos Engineering got its name mainly through Netflix’s Chaos Monkey.

Read More @ https://principlesofchaos.org/

A short history of the Chaos Toolkit: “We were trying to define a common, declarative API for chaos engineering”. This tool is an example of how “planning to fail” works in distributed / cloud-native / microservice-architecture-based products and solutions.

Tools like this become part of your development process: stress testing your application isn’t a luxury, and verifying resilience and scalability is a must with the growing demand for elastic, scalable infrastructure.

Cloud-native solutions: with the broad adoption of SOA and microservice architectures and the many IaaS and PaaS offerings, one of the “lessons learned” is that you need to be able to harness your infrastructure and start treating it in a standardized way.

So What is Cloud Native? The Cloud Native Computing Foundation (CNCF) describes it as “distributed systems capable of scaling to tens of thousands of self-healing multi-tenant nodes”. That’s a “how” (distributed systems) and a “why” (high scalability and automated resilience).

All the above in a complex/polyglot world will only be possible with standards and tools to enable all these moving parts to suit small to large scale microservice-based applications - This is what the Cloud Native Movement and the CNCF will promote in the years to come.

The following principles drive Cloud Native solutions:

  • Treat your own/cloud/hybrid infrastructure as a service: run on servers that can be flexibly provisioned on demand.

  • Microservices architecture: individual components are small, loosely coupled.

  • Automate deployments with continuous integration and testing: replace manual tasks with scripts or code.

  • Containerize: package processes with their dependencies making them easy to test, move and deploy.

  • Orchestrate: use standard / commonly used / battle tested orchestration tools.

Cloud service providers: as of today, the cloud service providers play a major role in all startup companies and even in big companies. This brings programmers to the point where knowing Linux and ops utilities is not enough; developers are expected to be experts in the different cloud providers in order to succeed with their tasks.

Each cloud provider comes with new platforms, techniques, API’s and databases.

Docker has revolutionized the world. However, a business can stall development, and we believe this year we’ll see less usage of Docker and more of other alternatives:

  • Distroless Docker images (Google) - creates Docker images with only the application (without the OS)
  • rkt (CoreOS) - a pod-based implementation targeting Kubernetes
  • Makisu (Uber) - creates Docker images without the need for a local daemon
  • OCI (Linux Foundation) - looks like the next standard; images can be written in several different formats, including Docker’s

Dex is an identity service that uses OpenID Connect to drive authentication for other apps.

Dex acts as a portal to other identity providers through “connectors.” This lets dex defer authentication to LDAP servers, SAML providers, or established identity providers like GitHub, Google, and Active Directory. Clients write their authentication logic once to talk to dex, then dex handles the protocols for a given backend.

When you want to add user management to Kubernetes, Dex can be the integration point to external authentication providers.

Go is a programming language introduced by Google in 2009. It is a compiled and strongly typed language similar to C, but with a much more intuitive syntax. Go is basically a functional-style language rather than a strict OOP one, designed for high performance (as it compiles to native machine code), without the bother of dealing with thread synchronization.

Our perspective is to define the distinct aspects in which Go may give better performance than the “standard” stack of Java / Python / NodeJS in backend development.

Why?

As MSA is becoming the standard way of writing complex applications as a bunch of small and rather simple services, maintained by small teams of 3-4 people, well-known programming paradigms such as OOP and classic design patterns are no longer a must. The code has become much simpler, and a new programming language such as Go is becoming more and more relevant: it is simple, only the required features are supported, there is ONLY one way to implement a thing, and it has the very high performance of a native application.

gRPC (Google Remote Procedure Call) is an open-source remote procedure call (RPC) system initially developed at Google. It uses HTTP/2 for transport and Protocol Buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, blocking and nonblocking bindings, and cancellation and timeouts. It generates cross-platform client and server bindings for many languages (C++, Java, Python, Node, etc.).

Many of our clients show interest and integrate gRPC in their systems. The shared language-neutral Protobuf definition allows them to create all code for all languages automatically and helps with the interoperability of different systems.

Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices - implementing all the techniques and methodologies mentioned above (which used to be part of the backend).

Kubeflow, the machine learning toolkit for Kubernetes, helps leverage the power of Kubernetes and the benefits of deploying products within a microservices framework. It’s an open, community-driven project that makes it easier to manage a distributed machine learning deployment by placing pipeline components such as training, serving, monitoring, and logging into containers on the Kubernetes cluster. It’s supported by Google and used in the Google AI Platform.

We are recommending Kubeflow because it makes the ML system portable and scalable (using the benefits of Kubernetes). It’s quite mature and open source.

Kustomize is a tool to customize K8S resource manifests prior to deployment. It supports:

  • Templating
  • Parametrization

We’re deploying our apps using the GitOps approach; developers can easily prepare K8S customizations and deploy their apps. It’s a Helm alternative, but less complex.

Advantages:

  • No installation on K8S cluster required (No tiller)
  • K8S native - part of kubectl since 1.14.x
  • Parametrization using K8S manifests, no special DSL required

There are no dependency or app-priority specifications like we have in Helm; all apps/resources are deployed at once, so they have to be developed in a pure K8S-native manner.

This means that if some app uses an external DB (e.g. MySQL) deployed in the same template as the app’s deployment resource, the app must not fail if the DB is not ready yet. It has to implement proper liveness and readiness probes, so the app will be “live” when it starts and “ready” when it can connect to the DB.
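A minimal sketch of that liveness/readiness split, using only the Python stdlib (the DB check is a stub; the paths follow the common /healthz and /ready convention):

```python
# Liveness vs. readiness, sketched with the Python stdlib. A real app
# would replace check_db() with an actual connection attempt, and
# Kubernetes would probe GET /healthz and GET /ready on this port.
from http.server import BaseHTTPRequestHandler, HTTPServer

db_ready = False   # flips to True once the external DB accepts connections

def check_db() -> bool:
    return db_ready  # stub: replace with a real connection attempt

def probe_status(path: str) -> int:
    if path == "/healthz":     # liveness: the process itself is up
        return 200
    if path == "/ready":       # readiness: the DB is reachable too
        return 200 if check_db() else 503
    return 404

class Probes(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(probe_status(self.path))
        self.end_headers()

# To serve: HTTPServer(("", 8080), Probes).serve_forever()
```

With this split, Kubernetes keeps the container alive while the DB is starting and simply withholds traffic until /ready returns 200.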

LocalStack - A fully functional local AWS cloud stack - LocalStack provides an easy-to-use test/mocking framework for developing Cloud applications.

Currently, the focus is primarily on supporting the AWS cloud stack. The tool has good support for Docker and enables developers to test services such as API Gateway, Kinesis & DynamoDB from the comfort of their laptop.

Kubernetes’ dominance and powerful abstractions are making everyday infrastructure operations simpler; many ingress controllers are extremely efficient at automating their configuration in alignment with Kubernetes’ hard-working standardization. Molding these new technologies with old practices and current requirements might be tricky.

Multi-ingress is a technique that allows you to create segregated network gateways and can be combined with smart technologies to create your own policy-based network routing.

Istio has Gateway objects, and the nginx-ingress-controller adheres to class names. Using these methods, ingress objects (or virtual services) are segregated and can be used in different network architectures, like internal/external and compliance-confined networks.

AWS was the first provider to offer functions as a service, back in Nov 2014. AWS initially released AWS Lambda as event-driven provisioning/operations, and it took just under 3 years for it to become the standard name in serverless and FaaS offerings.

AWS, like its competitors, offers Lambdas (a.k.a. functions) as a complement to its BaaS offerings, stitching together services such as:

  • Cognito
  • S3
  • CloudFormation
  • DynamoDB
  • RDS and many more

These integrations, alongside Lambda’s “infinite” scalability and its newly introduced (at the time) “price per 100ms”, made it very popular among both startups achieving their MVP and enterprises wishing to scale out or experiment with serverless and microservice architectures.

AWS Lambda provides many organizations the ability to write functions in a variety of Software languages and integrates well with many frameworks and other IaaS/PaaS/BaaS services.
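As an illustration, a minimal Lambda handler in Python looks like the following; the event shape matches an API Gateway proxy invocation, and the greeting logic is invented:

```python
# A minimal AWS Lambda handler in Python. Lambda invokes the function
# named in the handler setting (e.g. lambda_function.handler) with an
# event dict and a context object.
import json

def handler(event, context):
    """Respond to an API Gateway proxy event with a JSON greeting."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Local invocation with a fake event (context is unused here):
print(handler({"queryStringParameters": {"name": "dev"}}, None)["body"])
```

The same function can be triggered by S3, DynamoDB streams, SNS, and so on; only the event shape changes.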

More about AWS Lambda here

Google functions enable Serverless applications based on GCP IaaS offerings.

Cloud Functions lets application developers spin up code on demand in response to events originating from any API / HTTP request. Serverless architectures utilizing Google Functions integrated with Google Endpoints and BaaS services could build applications that scale from zero to infinity, on-demand - without provisioning or managing a single server.

Like other serverless and function providers, Google’s functions are best fitted for backend services such as Firebase, Cloud Datastore, and the ML solutions also offered by GCP.

More info on google functions in the following link

Open source: Knative - https://knative.dev/docs/

OpenFaaS https://blog.alexellis.io/introducing-functions-as-a-service/

With the growth in popularity of functions in general and serverless architecture in particular, varying from cloud-native/cloud-based to on-prem solutions, the “function” part of serverless has become the go-to tool for many DevOps processes that need automating, and in many cases it acts as the binding context of the by-design “loosely coupled” architecture.

As we grow in event-driven processes, we will see more and more functions listening on event streams such as SNS on AWS, or NATS, Kafka, etc. on tools like OpenFaaS and Knative. These tools/frameworks enable many DevOps teams to achieve their goals in a consistent, monitored, and repeatable way, not to mention the replacement of many utilities such as cron jobs.

To clarify, the term service mesh is used to describe the network of microservices that make up applications and the interactions between them. As a service mesh grows in size and complexity, it can become harder to understand and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring.

A service mesh also often has more complex operational requirements, like A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.

The Service Mesh, in general, is the enabler of monolith apps to co-exist alongside one another until a full transition to microservices is done.

There are quite a few service mesh solutions; the CNCF has both Istio and Linkerd as service meshes, and the following table highlights the principles of the service mesh, comparing these two specifically:

Many of the features above are achieved using the Operator pattern, which, accompanied by a controller, utilizes annotations and CRDs (Custom Resource Definitions) to manage service meshes at scale, as do the following market leaders, which utilize service meshes to enable smooth day-1 and day-2 operations.

Try

Berglas is a command-line tool and library for storing and retrieving secrets on Google Cloud. Secrets are encrypted with Cloud KMS and stored in Cloud Storage.

As a CLI, berglas automates the process of encrypting, decrypting, and storing data on Google Cloud.

As a library, berglas automates the inclusion of secrets into various Google Cloud runtimes.

Draft makes it easy to build applications that run on Kubernetes. Draft targets the “inner loop” of a developer’s workflow: as they hack on code, but before code is committed to version control.

  • draft create to containerize your app based on Draft packs.
  • draft up to deploy your application to a Kubernetes dev sandbox, accessible via a public URL.
  • Use a local editor to modify the application, with changes deployed to Kubernetes in seconds.

In this new agile world, many people question the role of architecture. And certainly the pre-planned architecture vision couldn’t fit in with modern dynamism. But there is another approach to architecture, one that embraces change in the agile manner. In this view architecture is a constant effort, one that works closely with programming so that architecture can react both to changing requirements but also to feedback from programming. It is called “Evolutionary Architecture”, to highlight that while changes are unpredictable, architecture can still move in a better direction.

We put it in our Try category, as it brings new principles to the architecture world: “Bring the Pain Forward” and “Last Responsible Moment”.

Gaia is an open-source automation platform that makes it easy and fun to build powerful pipelines in any programming language. Based on HashiCorp’s go-plugin and gRPC, gaia is efficient, fast, lightweight, and developer-friendly. Gaia is currently alpha!

Pipelines can be compiled locally or directly by the build system. Gaia clones the git repository and automatically builds the included pipeline. If a change is pushed (git push), Gaia will automatically rebuild the pipeline for you.

Gaia uses boltDB for storage. This makes the installation step super easy. No external database is currently required.

“Universal Kubernetes at Scale” [developed by SAP]. Gardener implements the automated management and operation of Kubernetes clusters as a service and provides support for multiple cloud providers (Alicloud, AWS, Azure, GCP, OpenStack, …). Its main principle is to leverage Kubernetes concepts for all of its tasks.

In essence, Gardener is an extension API server that comes along with a bundle of custom controllers. It introduces new API objects in an existing Kubernetes cluster (which is called garden cluster) in order to use them for the management of end-user Kubernetes clusters (which are called shoot clusters). These shoot clusters are described via declarative cluster specifications which are observed by the controllers. They will bring up the clusters, reconcile their state, perform automated updates and make sure they are always up and running.
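As a sketch, a declarative shoot cluster specification might look like the following (field names follow one published version of the Gardener Shoot API and may differ between releases; the project, region, and machine values are illustrative):

```yaml
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-shoot
  namespace: garden-my-project   # shoots live in a project namespace of the garden cluster
spec:
  cloudProfileName: aws
  region: eu-west-1
  provider:
    type: aws
    workers:
      - name: worker-pool-1
        machine:
          type: m5.large
        minimum: 2               # controllers reconcile the node pool between these bounds
        maximum: 4
  kubernetes:
    version: "1.15"
```

The Gardener controllers observe this object, bring the cluster up, and keep reconciling it against the spec.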

GitHub Package Registry is a software package hosting service, similar to npmjs.org, rubygems.org, or hub.docker.com, which allows you to develop your code and host your packages in one place. You can host software packages privately or publicly and use them as dependencies in your projects.

GitHub Package Registry uses the native package tooling commands you’re already familiar with to publish, query, download, and change package versions and currently supports the following clients and formats: npm, gem, mvn, docker and NuGet.

This approach can be useful when you want to host deployed artifacts next to their sources or when an artifact repository is unavailable or you don’t want to maintain one.
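For example, to consume npm packages from GitHub Package Registry, a project's `.npmrc` can point a scope at the registry (`OWNER` and the token variable are placeholders):

```ini
# .npmrc — route packages under the @OWNER scope to GitHub Package Registry
@OWNER:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=${GITHUB_TOKEN}
```

After that, `npm install @OWNER/some-package` resolves against GitHub instead of npmjs.org.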

Grafeas (“scribe” in Greek) is an open-source artifact metadata API that provides a uniform way to audit and govern your software supply chain. Grafeas defines an API spec for managing metadata about software resources, such as container images, Virtual Machine (VM) images, JAR files, and scripts. You can use Grafeas to define and aggregate information about your project’s components. Grafeas provides organizations with a central source of truth for tracking and enforcing policies across an ever-growing set of software development teams and pipelines. Build, auditing, and compliance tools can use the Grafeas API to store, query, and retrieve comprehensive metadata on software components of all kinds.

This tool usually serves as preparation for something like Kritis, which helps make sure images with unsigned or known vulnerabilities are not deployed.

Gravity is a toolkit for creating (and using) self-contained “images” that enable the installation and management of applications on Kubernetes clusters. Using a (YAML-based) cluster descriptor, Gravity can run pre-flight tests on your nodes and enables you to add and manage all your cluster resources. Gravity will run on any cloud, but its main use case is on-prem infrastructure. It can be used for creating appliances that run on air-gapped networks.

Harbor is an open-source cloud-native registry that stores, signs, and scans container images for vulnerabilities.

Harbor solves common challenges by delivering trust, compliance, performance, and interoperability. It fills a gap for organizations and applications that cannot use a public or cloud-based registry or want a consistent experience across clouds.

Harbor is a private registry for Docker images and Helm charts with user-management support. It can manage and synchronize multiple Docker registries in different regions.

In summary, if you need a private Docker registry like ECR or DockerHub, Harbor is a good candidate.

k3s is wrapped in a simple package that reduces the dependencies and steps needed to run a production Kubernetes cluster. Packaged as a single binary, k3s makes installation and upgrade as simple as copying a file. TLS certificates are automatically generated to ensure that all communication is secure by default.

Used in Edge & IoT to enable a standard distro across the board.
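Getting a single-node cluster running is, per the k3s project, essentially a one-liner:

```shell
# Download and run the k3s install script; starts a server node as a service
curl -sfL https://get.k3s.io | sh -

# The bundled kubectl talks to the local cluster
sudo k3s kubectl get nodes
```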

Kapitan (https://github.com/deepmind/kapitan) is a tool to manage complex deployments using jsonnet and jinja2.

Use Kapitan to manage your Kubernetes manifests, your documentation, your Terraform configuration or even simplify your scripts.

Keycloak is an open-source Identity and Access Management solution aimed at modern applications and services. It makes it easy to secure applications and services with little to no code.

This page gives a brief introduction to Keycloak and some of the features. For a full list of features refer to the documentation.

Trying Keycloak is quick and easy. Take a look at the Getting Started tutorial for details.
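As a sketch, a local Keycloak instance can be started with Docker (the `jboss/keycloak` image and the admin-credential environment variables are assumptions based on the official image of the time):

```shell
# Run Keycloak locally with a bootstrap admin user
docker run -p 8080:8080 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin \
  jboss/keycloak
```

The admin console then becomes available on http://localhost:8080.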

Kritis (“judge” in Greek, https://github.com/grafeas/kritis) is an open-source solution for securing your software supply chain for Kubernetes applications.

Kritis enforces deploy-time security policies using the Google Cloud Container Analysis API, and in a subsequent release, Grafeas.

This tool helps you control which CVEs (known CVEs) can be deployed to your cluster, using Kubernetes admission control.

Kritis usually works alongside Grafeas.
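Deploy-time policy is expressed through Kritis's ImageSecurityPolicy custom resource; a sketch might look like this (field names follow the project's documentation, and the severity and CVE values are illustrative):

```yaml
apiVersion: kritis.grafeas.io/v1beta1
kind: ImageSecurityPolicy
metadata:
  name: my-isp
  namespace: default
spec:
  packageVulnerabilityRequirements:
    # Block admission of images with vulnerabilities above this severity
    maximumSeverity: HIGH
    # Explicitly tolerated CVEs, referenced as Container Analysis notes
    whitelistCVEs:
      - providers/goog-vulnz/notes/CVE-2017-1000082
```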

Terraform Template to Set Up a Kubernetes Cluster on OpenStack/AWS/Azure. This project contains a Terraform environment to set up a Kubernetes cluster in various IaaS environments. The following IaaS layers are currently supported:

  • OpenStack
  • AWS
  • Azure

Kubify supports cluster setup, recovery from an etcd backup file and rolling of cluster nodes. A cluster update is possible as long as the contained components can just be redeployed and the update is done by a rolling update by the Kubernetes controllers. The update of the kubelets is supported without rolling the nodes.

Open Policy Agent (OPA) is a general-purpose policy engine with uses ranging from authorization and admission control to data filtering. OPA provides greater flexibility and expressiveness than hard-coded service logic or ad-hoc domain-specific languages. And it comes with powerful tooling to help you get started.

Here are just a few examples of what you can do with OPA:

  • Kubernetes Admission Control
  • HTTP API Authorization
  • Remote Access
  • Data Filtering with Partial Evaluation
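
As a small sketch of HTTP API authorization, a policy in Rego (OPA's policy language) might look like this; the package name, paths, and roles are illustrative:

```rego
package httpapi.authz

# Deny by default
default allow = false

# Allow users to read their own profile
allow {
    input.method == "GET"
    input.path == ["users", input.user]
}

# Allow anyone with the admin role to do anything
allow {
    input.roles[_] == "admin"
}
```

The calling service sends the request attributes as `input` and enforces OPA's `allow` decision.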

Performance: Rust is blazingly fast and memory-efficient; with no runtime or garbage collector, it can power performance-critical services, run on embedded devices, and easily integrate with other languages.

Reliability: Rust’s rich type system and ownership model guarantee memory-safety and thread-safety, enabling you to eliminate many classes of bugs at compile-time.

Productivity: Rust has great documentation, a friendly compiler with useful error messages, and top-notch tooling: an integrated package manager and build tool, smart multi-editor support with auto-completion and type inspections, an auto-formatter, and more.

Why? Because of the confidence one gains when writing a program in it. Rust’s very strict and pedantic compiler checks each and every variable you use and every memory address you reference. It may seem that this would prevent you from writing effective and expressive code, but surprisingly enough, it’s very much the reverse: writing an effective and idiomatic Rust program is actually easier than writing a potentially dangerous one.
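As a minimal sketch of the ownership and borrowing model described above (the function and variable names are our own):

```rust
// Borrowing (&) lets a function read data without taking ownership,
// so the caller can keep using it afterwards — all checked at compile time.
fn total_len(words: &[String]) -> usize {
    words.iter().map(|w| w.len()).sum()
}

fn main() {
    let words = vec![String::from("fast"), String::from("safe")];
    let n = total_len(&words); // borrow: `words` remains usable
    assert_eq!(n, 8);

    // Assignment moves ownership; using `words` after this line
    // would be a compile-time error, not a runtime crash.
    let moved = words;
    assert_eq!(moved.len(), 2);
}
```

No garbage collector is involved: `moved` is freed deterministically when it goes out of scope, and any use-after-move is rejected by the compiler.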

A TFX pipeline is a sequence of components that implement an ML pipeline which is specifically designed for scalable, high-performance machine learning tasks. That includes modeling, training, serving inference, and managing deployments to online, native mobile, and JavaScript targets.

About the Tech Radar

The Radar is a new initiative from Tikal to summarize our usage & opinion about certain technology topics in our client solutions.

Our Radar has 4 domains, Backend & ML, DevOps, Frontend and Mobile, which are mapped to our main core expertise.

Our Radar has four rings, which are described from the middle:

  • The Try ring is for interesting topics that we think you should explore and keep an eye on.
  • The Start ring is for topics that we think are ready for use, but not as completely proven as those in the Keep ring.
  • The Keep ring represents topics that we think you should keep using now in the appropriate context.
  • The Stop ring is for topics that are getting attention in the industry, but we don't think you should continue using them.