Start

Serverless computing is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Serverless is a form of utility computing that provides complex infrastructure and security requirements as part of managed services; since these services are managed by the provider, the name serverless really means less management of servers. Serverless should not be confused with the Serverless framework …

Serverless framework The Serverless framework (Serverless.com) is a toolkit for deploying and operating serverless architectures. It utilizes the different cloud providers' function and API gateway APIs to expose backend services.

Highlights:

  • Integrates with main cloud providers - AWS, GCP, Azure
  • Has Kubeless plugin
  • Yaml definition of both API gateway and lambdas
  • Utility to test functions locally using a local docker daemon

A good introduction to the subject: https://coreos.com/blog/introducing-operators.html

  • MySQL Operator: https://medium.com/oracledevs/introducing-the-oracle-mysql-operator-for-kubernetes-b06bd0608726
  • Prometheus Operator
  • TensorFlow Operator
  • Rook.io Operator
  • etcd Operator

Google functions enable Serverless applications based on GCP IaaS offerings.

Cloud Functions lets application developers spin up code on demand in response to events originating from any API or HTTP request. Serverless architectures utilizing Google Functions, integrated with Google Endpoints and BaaS services, can build applications that scale from zero to infinity on demand - without provisioning or managing a single server.
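The on-demand handler model described above can be sketched as a plain function. This is an illustrative sketch, not the actual Cloud Functions SDK; the name `handle_request` and the request/response shapes are assumptions:

```python
import json

# Hypothetical HTTP-triggered function: the platform invokes it once per
# incoming request, so the author provisions and manages no servers.
def handle_request(request_body):
    event = json.loads(request_body)      # event originating from an HTTP request
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Scaling "from zero to infinity" simply means the provider runs as many concurrent copies of this function as there are requests, and none when there are none.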

As with other serverless and function providers, Google's functions are best fitted for backend services such as Firebase, Cloud Datastore, and the ML solutions also offered by GCP.

More info on Google Functions in the following link

AWS was the first provider to offer functions as a service, back in Nov 2014. AWS initially released AWS Lambda as an event-driven provisioning/operations service, and it took just under 3 years for it to become the standard name in serverless and FaaS offerings.

AWS, like its competitors, offers Lambdas (a.k.a. functions) as complementary to its BaaS offerings, stitching together services such as:

  • Cognito
  • S3
  • CloudFormation
  • DynamoDB
  • RDS and many more

These integrations, alongside Lambda's "infinite" scalability and its newly introduced (at the time) "price per 100ms", made it very popular among both startups achieving their MVP and enterprises wishing to scale out or experiment with Serverless and Microservices Architectures.

AWS Lambda provides many organizations the ability to write functions in a variety of software languages and integrates well with many frameworks and other IaaS/PaaS/BaaS services.

More about AWS Lambda here

We use Node.js in DevOps -

  • Write small tools, or lambda tasks in the Serverless framework (gluing processes)
  • But I wouldn’t say it’s the main language in “devops domain”

Data movement and data processing activities include scheduling, data formatting, and chaining data from various sources to various destinations - a workflow of data.

AirFlow The Airflow Platform is a tool for describing, executing, and monitoring workflows of data

  • In Airflow, a DAG – or a Directed Acyclic Graph – is a collection of all the tasks you want to run, organized in a way that reflects their relationships and dependencies.
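The essence of a DAG - tasks plus the dependencies between them, executed in an order that respects those dependencies - can be sketched with the standard library alone (Airflow's scheduler does this and much more; the task names here are illustrative):

```python
# Python 3.9+: graphlib provides topological ordering out of the box.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on - a tiny ETL-style DAG.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

def run_order(dependencies):
    """Return an execution order that respects every dependency edge."""
    return list(TopologicalSorter(dependencies).static_order())
```

For this chain there is exactly one valid order; a real DAG with fan-out would allow independent tasks to run in parallel.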

Luigi is a Python package that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, handling failures, command line integration, and much more.

Azkaban is a batch workflow job scheduler created at LinkedIn to run Hadoop jobs. Azkaban resolves the ordering through job dependencies and provides an easy to use web user interface to maintain and track your workflows.

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications in a cluster.

Kubernetes uses a set of APIs and a YAML/JSON-based configuration language to define and maintain all aspects of containers running on a cluster, including networking, service discovery, proxying and load balancing, horizontal and vertical scaling, security, and more. Kubernetes as a service is part of all major cloud providers' offerings, and there are projects that can deploy a Kubernetes cluster automatically on almost every compute environment.

Kubernetes introduces the concept of a POD - a set of one or more containers that are deployed as a single unit (same node, same namespace, and same network configuration). PODs can be thought of as lightweight servers constructed from container images. PODs can be deployed using controllers that define the behavior of the POD in the cluster. Commonly used controllers are the 'Deployment' controller, which defines a replica-set to make sure a given number of POD instances is available at any moment in time, and the 'DaemonSet' controller, which deploys one POD per running worker node in the cluster.

Services running as PODs can be exposed, internally to the cluster or externally to the world, via a 'Service' configuration object that acts as a reverse proxy and simple load balancer, providing a single endpoint for the service. All configuration objects (PODs, controllers, Services, etc.) are loosely coupled via tags and selectors, which makes the infrastructure flexible and configurable.
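The Deployment object described above is usually written as YAML; as a sketch, here is the same structure built as the equivalent Python dict, showing how the replica count and the label/selector coupling fit together (the app name, image, and replica values are illustrative):

```python
def deployment(name, image, replicas):
    """Build a minimal apps/v1 Deployment object for a single-container POD."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            # The replica-set keeps this many POD instances alive at any moment.
            "replicas": replicas,
            # Loose coupling: the controller finds its PODs via label selectors.
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```

A 'Service' object would select the same `app` label to expose these PODs behind one endpoint.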

A service mesh offers consistent discovery, security, tracing, monitoring and failure handling without the need for a shared asset such as an API gateway or ESB. A typical implementation involves lightweight reverse-proxy processes deployed alongside each service process, perhaps in a separate container.

These proxies communicate with service registries, identity providers, log aggregators, and so on. Service interoperability and observability are gained through a shared implementation of this proxy, but not a shared runtime instance.

What is a ‘Service Mesh’?

  • All service-to-service communication takes place on top of a software component called the service mesh (or side-car proxy).
  • The service mesh provides built-in support for network functions such as resiliency, service discovery, and many more.
  • Service developers can focus more on the business logic while most of the work related to network communication is offloaded to the service mesh. A good example: no more worrying about circuit breaking or rate limiting when your microservice calls another service - that already comes as part of the service mesh.
  • The service mesh is language agnostic, since the microservice-to-proxy communication is always over standard protocols such as HTTP 1.x/2.x, gRPC, etc.
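Circuit breaking, one of the network functions the mesh takes off the developer's plate, reduces to a small piece of logic. A toy sketch of it (illustrative only - real proxies like Envoy implement far richer versions, with timeouts and recovery):

```python
class CircuitBreaker:
    """After too many consecutive failures, fail fast instead of calling out."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, func):
        if self.failures >= self.max_failures:
            # Circuit is open: protect the upstream service from more load.
            raise RuntimeError("circuit open: failing fast")
        try:
            result = func()
            self.failures = 0  # a success closes the circuit again
            return result
        except Exception:
            self.failures += 1
            raise
```

The point of the mesh is that this class lives in the sidecar proxy, so every service in every language gets it for free.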

Leading OpenSource service mesh solutions (+ commercial offerings)

Cloud Native Solution With the broad adoption of SOA and Microservice Architectures and many IaaS and PaaS offerings, one of the "lessons learned" is that you need to be able to harness your infrastructure and start treating it in a standardized way.

So what is Cloud Native? The Cloud Native Computing Foundation (CNCF) describes it as "distributed systems capable of scaling to tens of thousands of self healing multi-tenant nodes". That's a "how" (distributed systems) and a "why" (high scalability and automated resilience).

All the above, in a complex/polyglot world, will only be possible with standards and tooling that enable all these moving parts to suit small- to large-scale microservice-based applications - this is what the Cloud Native movement and the CNCF will promote in the years to come.

The following principles drive Cloud Native solutions:

  • Treat your own / cloud / hybrid infrastructure as-a-service: run on servers that can be flexibly provisioned on demand.

  • Microservices architecture: individual components are small, loosely coupled.

  • Automate deployments and continuously integrate and test: replace manual tasks with scripts or code.

  • Containerize: package processes with their dependencies making them easy to test, move and deploy.

  • Orchestrate: use standard / commonly used / battle tested orchestration tools.

Chaos Engineering is becoming a discipline in designing distributed systems in order to address the uncertainty of distributed systems at scale.

Chaos Engineering can be thought of as the facilitation of experiments to uncover systemic weaknesses.

These experiments follow four steps:

  1. Start by defining ‘steady state’ as some measurable output of a system that indicates normal behavior.
  2. Hypothesize that this steady state will continue in both the control group and the experimental group.
  3. Introduce variables that reflect real-world events like servers that crash, hard drives that malfunction, network connections that fail, etc.
  4. Try to disprove the hypothesis by looking for a difference in steady state between the control group and the experimental group.
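The four steps above can be sketched as a toy experiment: a measurable steady state, a control group, an experimental group with an injected fault, and a comparison. All names and numbers here are illustrative:

```python
import random

def steady_state(results):
    """Step 1: a measurable output - here, the fraction of successful requests."""
    return sum(results) / len(results)

def run_group(requests=1000, failure_injection=0.0, seed=7):
    """Steps 2-3: run a group, optionally injecting a real-world-style fault."""
    rng = random.Random(seed)
    # A request fails if the injected fault (e.g. a crashed server) hits it.
    results = [0 if rng.random() < failure_injection else 1
               for _ in range(requests)]
    return steady_state(results)

control = run_group(failure_injection=0.0)
experiment = run_group(failure_injection=0.05)  # e.g. 5% of servers crash

# Step 4: the hypothesis is disproved if steady state diverges noticeably.
weakness_found = (control - experiment) > 0.01
```

In a real experiment the "variables" are actual infrastructure faults and the steady state is a production metric, but the control/experiment comparison is exactly this.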

In essence -> the harder it is to disrupt the steady state, the more confidence we have in the behavior of the system. And if a weakness is uncovered, we now have a target for improvement before that behavior manifests in the system at large.

Chaos Engineering earned its name mainly through Netflix's Chaos Monkey.

Read More @ https://principlesofchaos.org/

Jenkins is widely recognized as the de-facto standard solution for implementing CI and even CD, but there are other leading alternatives:

  • Travis - the GitHub hosted CI
  • CircleCI - a free cloud-based system (it also has an on-premise option)
  • Gitlab-CI - part of the GitLab platform
  • TeamCity - JetBrains' CI/CD server

Following the hype of DevOps, a new buzzword has risen - SRE: Site Reliability Engineering.

The bottom line is that SRE is an implementation of DevOps principles, specifically with attention to operating a company's site with high reliability.

From Wikipedia: DevOps defines 5 key pillars of success:

  • Reduce organizational silos
  • Accept failure as normal
  • Implement gradual changes
  • Leverage tooling and automation
  • Measure everything

SRE satisfies the DevOps pillars as follows:[2]

  • Reduce organizational silos: SRE shares ownership with developers to create shared responsibility;[3] SREs use the same tools that developers use, and vice-versa.
  • Accept failure as normal: SREs embrace risk;[4] SRE quantifies failure and availability in a prescriptive manner using SLIs and SLOs;[5] SRE mandates blameless post mortems.[6]
  • Implement gradual changes: SRE encourages developers and product owners to move quickly by reducing the cost of failure.[7]
  • Leverage tooling and automation: SREs have a charter to automate menial tasks (called "toil") away.[8]
  • Measure everything: SRE defines prescriptive ways to measure values.[9] SRE fundamentally believes that systems operation is a software problem.

Recommendations:

It seems SRE is a valid buzzword - there is demand for professionals called 'SREs'. As to the substance - the technologies are the same as the ones in the DevOps realm; the implementation is more specific to what was once called "Ops". We should relate to the role of SREs in all DevOps presentations so as to join the hype.

TICK (Telegraf InfluxDB Chronograf Kapacitor) A modern time series platform, designed from the ground up to handle metrics and events. InfluxData's products are based on an open source core, consisting of the projects Telegraf, InfluxDB, Chronograf, and Kapacitor - collectively called the TICK Stack.

Telegraf is a plugin-driven server agent for collecting and reporting metrics.

  • Telegraf has plugins or integrations to source a variety of metrics directly from the system it's running on, to pull metrics from third-party APIs, or even to listen for metrics via StatsD and Kafka consumer services. It also has output plugins to send metrics to a variety of other datastores, services, and message queues, including InfluxDB, Graphite, OpenTSDB, Datadog, Librato, Kafka, MQTT, NSQ, and many others.

InfluxDB is a time series database built from the ground up to handle high write & query loads.

  • InfluxDB is a custom high-performance datastore written specifically for timestamped data, including DevOps monitoring, application metrics, IoT sensor data, and real-time analytics. Conserve space on your machine by configuring InfluxDB to keep data for a defined length of time, automatically expiring and deleting any unwanted data from the system. InfluxDB also offers a SQL-like query language for interacting with data.
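The timestamped points InfluxDB ingests are written in a simple text format called line protocol (`measurement,tags fields timestamp`). A minimal formatter shows its shape - simplified: no escaping, quoting of string fields, or type suffixes:

```python
def line_protocol(measurement, tags, fields, timestamp_ns):
    """Render one data point as a (simplified) InfluxDB line protocol line."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    # measurement + tags, a space, the field set, a space, nanosecond timestamp
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"
```

Tags are indexed metadata (host, region); fields are the actual measured values.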

Chronograf is the administrative user interface and visualization engine of the platform.

  • It makes monitoring and alerting for your infrastructure easy to set up and maintain. It is simple to use and includes templates and libraries that allow you to rapidly build dashboards with real-time visualizations of your data and to easily create alerting and automation rules.

Kapacitor is a native data processing engine. It can process both stream and batch data from InfluxDB.

  • Kapacitor lets you plug in your own custom logic or user-defined functions to process alerts with dynamic thresholds, match metrics for patterns, compute statistical anomalies, and perform specific actions based on these alerts like dynamic load rebalancing. Kapacitor integrates with HipChat, OpsGenie, Alerta, Sensu, PagerDuty, Slack, and more.

Think of SecOps as a management approach that bridges the gap to connect security and operations teams, in much the same way that DevOps unifies software developers and operations professionals.

SecOps links the security & operations teams together to work with shared accountability, processes, and tools to ensure that you do not have to sacrifice security to maintain a commitment to uptime and performance - especially with the growing demand for on-demand / auto-scalable solutions, alongside compliance standards such as HIPAA (https://www.hhs.gov/hipaa/index.html), GDPR, and many others.

Integrations like Let's Encrypt, Istio, and kube2iam show this trend, dealing with the complexities of automating security.

Kubeless is a Kubernetes-native serverless framework that lets you deploy small bits of code (functions) without having to worry about the underlying infrastructure. It is designed to be deployed on top of a Kubernetes cluster and take advantage of all the great Kubernetes primitives. If you are looking for an open source serverless solution that clones what you can find on AWS Lambda, Azure Functions, and Google Cloud Functions, Kubeless is for you!

Kubeless Includes:

  1. Support for Python, Node.js, Ruby, PHP, and custom runtimes
  2. CLI compliant with AWS Lambda CLI
  3. Event triggers using Kafka messaging system and HTTP events
  4. Prometheus monitoring of functions calls and function latency by default
  5. Serverless Framework plugin

Read more @ kubeless.io

Keep

Python is a powerful high-level, interpreted, open source, object-oriented programming language - portable, extensible, and embeddable - with simple syntax and a large standard library to solve common tasks. Python is a general-purpose language. It has a wide range of applications, from web development (Django, Bottle) and scientific and mathematical computing (Orange, SymPy, NumPy) to desktop graphical user interfaces (Pygame, Panda3D).

Apache Groovy is an object-oriented programming language for the Java platform. It is a dynamic language with features similar to those of Python, Ruby, Perl, and Smalltalk. It can be used as a scripting language for the Java Platform, is dynamically compiled to Java virtual machine (JVM) bytecode, and interoperates with other Java code and libraries. Groovy uses a Java-like curly-bracket syntax. Most Java code is also syntactically valid Groovy, although semantics may be different.

We use Groovy mostly for programming Jenkins-code (Jenkinsfile and Jenkins shared libraries) which are Groovy-based.

Ansible is a simple automation language that can perfectly describe an IT application infrastructure. It's easy to learn, self-documenting, and doesn't require a grad-level computer science degree to read. Ansible has an agentless architecture; it is used for configuration management, and it orchestrates the app lifecycle and deployment flows.

Ansible is the most popular deployment tool we use.

Helm manages Kubernetes applications through Helm Charts, which help you define, install, and upgrade even the most complex Kubernetes applications.

Charts are easy to create, version, share, and publish. There are Helm registry offerings from both Quay.io and Artifactory. Helm helps to manage Services, Deployments, Secrets, and all parts related to Kubernetes-ready applications.

Apache Kafka is a distributed streaming platform.

Kafka has three key capabilities:

  • Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
  • Store streams of records in a fault-tolerant durable way.
  • Process streams of records as they occur.

Kafka is generally used for two broad classes of applications:

  • Building real-time streaming data pipelines that reliably get data between systems or applications
  • Building real-time streaming applications that transform or react to the streams of data
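Kafka's storage model - the thing that makes both classes of application possible - is an append-only log per topic, with each consumer tracking its own read offset. A toy in-memory sketch (illustrative; not the real Kafka API, which also partitions and replicates the log):

```python
class Topic:
    """An append-only record log, Kafka-style, in miniature."""

    def __init__(self):
        self.log = []  # records are stored durably and in arrival order

    def publish(self, record):
        self.log.append(record)
        return len(self.log) - 1  # the offset assigned to the new record

    def consume(self, offset):
        """Read every record from a given offset onward.

        Consumers own their offsets, so many independent readers can
        process the same stream at their own pace, or replay it.
        """
        return self.log[offset:]
```

Replayability - re-reading from offset 0 - is what distinguishes this model from a classic queue that deletes messages on delivery.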

A more comprehensive definition could be read here

Prometheus is an open-source system monitoring and alerting toolkit originally built at SoundCloud. Prometheus uses a pull model, scraping endpoints that expose metric data over HTTP(S) via standardized exporters such as the JMX, MySQL, cAdvisor, and node_exporter exporters, or via Prometheus client library implementations within common frameworks and software languages.

Prometheus consists of a centralized server (written in Go) which implements:

  1. A time series database
  2. An endpoint collection mechanism based on common service discovery providers, varying from all the common cloud provider APIs, to service discovery services such as etcd/consul, and of course Docker and Kubernetes, and more
  3. A web interface which also provides the PromQL interface for time series data queries.
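The pull model works because every scrape target just serves plain text in the Prometheus exposition format. A minimal formatter shows what a scraped page looks like (illustrative - in practice a client library renders this for you):

```python
def render_metric(name, help_text, value, labels=None):
    """Render one gauge in the Prometheus text exposition format."""
    label_str = ""
    if labels:
        # Labels attach dimensions like job or instance to the sample.
        pairs = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        label_str = "{" + pairs + "}"
    return (f"# HELP {name} {help_text}\n"
            f"# TYPE {name} gauge\n"
            f"{name}{label_str} {value}\n")
```

The server scrapes pages like this on an interval and appends the samples to its time series database, where PromQL can query them.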

Additional (Optional) components:

  1. Push gateway - a server which provides monitoring capabilities for short-term/stateless services to push metrics (over the default pull method)
  2. node-exporter - the official OS-level metric exporter
  3. JMX exporter - Java application monitoring + common JVM-related metrics
  4. Client libraries for all common languages: Node.js, Go, Java, Python …
  5. many, many more

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Docker Compose is used for running multi-container applications on a single machine, or on multiple machines with Swarm - mostly in Dev and QA environments, and in some places in Production as well.

ELK is short for Elasticsearch Logstash Kibana.

Elasticsearch is a search engine based on Lucene. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.

Logstash is one of the pioneers of log forwarding and processing and has become a standard for shipping logs to Elasticsearch (and other destinations).

Kibana lets you visualize your Elasticsearch data and Discover events, create and share dashboards and more.

In the past 3 years or so we also see the "L" being replaced - by the Beats from Elastic, Fluentd, and others taking Logstash's place.

During development you generate a fair amount of different artifacts. These might include:

  • The source code
  • The compiled application
  • A deployable package
  • Documentation and potentially others as well

While you could use a source control system to store all of them, it's usually massively inefficient, as source control systems are designed to handle text-based files, not binary files. You might be able to use them as a simple storage mechanism if most of your releases are text-based and you don't have to store a lot of binary data.

Artifact repositories, however, are designed to store all kinds of files, including binary ones - anything from zipped-up source code, to build results, to artifacts like Docker images. Also, they not only store these artifacts but also help manage them using various additional functions.

Common artifact-repositories are:

  • Artifactory, Nexus - mostly for code build artifacts (Maven-based)
  • NuGet - for .NET packages
  • ECR (Amazon Elastic Container Registry), Docker-hub - for Docker-based container/images storage.

Grafana’s motto is “The analytics platform for all your metrics”, and in the past ~5 years or so Grafana has done just that, becoming a centralized hub of data processing and visualization. Grafana supports a big variety of data sources, enabling processing and real-time visualization of time series data originating from many sources simultaneously - from traditional databases such as MySQL and PostgreSQL to time series databases such as OpenTSDB, Graphite, Prometheus, Elasticsearch, and others.

Grafana also provides an extensible plugin interface, adding the ability to enrich visualizations with plugins or custom “data sources”.

The sugar coating of Grafana is the Grafana community website, which maintains a centralized hub for hosting plugins and dashboards which the community can download & manage via source control. In many cases, there is either a ready dashboard for your use case or at least a good starting point.

Grafana also supports an HA installation method, authentication [Basic/LDAP/OAuth/etc] & authorization schemes and organization management.

Read more

Jenkins Pipeline (or simply “Pipeline” with a capital “P”) is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.

A continuous delivery (CD) pipeline is an automated expression of your process for getting software from version control right through to your users and customers. Every change to your software (committed in source control) goes through a complex process on its way to being released. This process involves building the software in a reliable and repeatable manner, as well as progressing the built software (called a “build”) through multiple stages of testing and deployment.

Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines “as code” via the Pipeline domain-specific language (DSL) syntax.

The definition of a Jenkins Pipeline is written into a text file (called a Jenkinsfile) which in turn can be committed to a project’s source control repository. This is the foundation of “Pipeline-as-code”: treating the CD pipeline as part of the application, to be versioned and reviewed like any other code.

Out of the 2 options of ‘scripted Pipeline’ and ‘Declarative Pipeline’, it is mostly recommended to use the ‘Declarative Pipeline’ option.

Jenkins Pipeline (as part of Jenkins) is in use by a wide range of our customers and is currently the preferred orchestration tool offered by Tikal.

Service discovery is the practice of automatic detection of devices and services offered/hosted by a computer network. Service discovery has become the heart of SOA and MSA software architectures, and in the world of containers it has become a method of establishing distributed and dynamic configuration for containers/microservices.

Service discovery providers such as etcd and consul, and cloud metadata services such as those of AWS/GCP/Azure, are considered authoritative service discovery providers. Many open source projects' HA configurations rely on service discovery in order to establish clusters; applications like Kubernetes, Swarm, Mesos, RabbitMQ & Prometheus come with built-in support for service discovery, which makes their integration seamless.
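At its core, a discovery provider is a registry that maps a well-known service name to the addresses of its live instances, resolved by clients at call time. A toy sketch (illustrative; etcd and consul add leases, health checks, and watches on top of this idea):

```python
class Registry:
    """Map service names to the addresses of their registered instances."""

    def __init__(self):
        self.services = {}

    def register(self, name, address):
        # Each instance announces itself when it starts.
        self.services.setdefault(name, []).append(address)

    def resolve(self, name):
        """Clients look the name up per call, so topology can change freely."""
        addresses = self.services.get(name, [])
        if not addresses:
            raise LookupError(f"no instances of {name!r} registered")
        return addresses
```

Because clients resolve at call time rather than hard-coding addresses, instances can come and go - which is exactly what dynamic container scheduling needs.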

Service discovery comes in many forms and shapes, and we will most definitely be seeing more and more providers and integrations in the near future.

A Time Series Database (TSDB) is a database optimized for time-stamped or time series data. Time series are simply measurements or events that are tracked, monitored, downsampled, and aggregated over time. This could be server metrics, application performance monitoring, network data, sensor data, events, clicks, trades in a market, and many other types of analytics data. The key difference between time series data and regular data is that you’re always asking questions about it over time.
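"Downsampled and aggregated over time" in practice means bucketing raw points into fixed time windows and keeping one aggregate per window - the operation TSDBs optimize for. A minimal sketch (window sizes and aggregate choice are illustrative):

```python
def downsample(points, window, agg=max):
    """Reduce (timestamp, value) points to one aggregate per time window.

    points: list of (timestamp_seconds, value)
    window: bucket width in seconds
    agg:    aggregation function per bucket (max, min, sum, ...)
    """
    buckets = {}
    for ts, value in points:
        # Align each timestamp to the start of its window.
        buckets.setdefault(ts - ts % window, []).append(value)
    return [(bucket, agg(values)) for bucket, values in sorted(buckets.items())]
```

Retention policies in real TSDBs chain this: keep raw data for days, 10s rollups for weeks, hourly rollups for years.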

There are quite a few popular open source time series databases which are widely used; some of them are listed below:

  1. DalmatinerDB
  2. InfluxDB
  3. Prometheus
  4. Riak TS
  5. OpenTSDB
  6. KairosDB
  7. Elasticsearch
  8. Druid
  9. Blueflood
  10. Graphite (Whisper)

A great source to read here

Terraform is a provisioning tool (as opposed to a configuration management tool): it works with immutable infrastructure, uses a declarative language, and is masterless and agentless.

Being a popular tool from a well-established vendor, HashiCorp, we should certainly keep offering Terraform as an alternative to our customers.

Terraform has recently announced support for Kubernetes, going to show that HashiCorp is on its feet and adapting to changes.

AWS - Native Tools

Amazon Web Services has in the past delivered new tools at a fast pace and it does not seem like it is going to slow down soon.

Many Open Source services (e.g. CI with Jenkins) can now be performed with AWS tools - albeit with fewer features - but the basic job (in this case of CI) can be performed with AWS CodeBuild, CodePipeline, CodeDeploy, Elastic Beanstalk & CloudFormation.

Amazon is covering more and more ground in supplying 3rd party software as services in AWS and delivering easy access to simple endpoints: RDS (MySQL, Postgres, etc.), ElastiCache (Redis), ECS & Fargate (Docker). It won’t be surprising to see AWS supplying one of the following as services on the cloud platform soon: Cassandra, Vertica, MongoDB, Kafka.

AWS covers big data computation in-house as well, with Amazon Redshift and EMR.

Together with AWS monitoring and alerting solutions, the full scope of DevOps needs is covered by Amazon and will continue to be developed.

Many startup companies find it beneficial to “do” their DevOps with AWS tools alone at the start, and end up vendor locked. We need to be able to better support customers that use AWS tools, and at the same time be able to migrate them to 3rd party tools (Kubernetes, Ansible, Jenkins etc).

Stop

It seems the use of Ruby as a software development language - once very popular in the DevOps movement, mainly through tools like Chef, Puppet, Logstash, and Fluentd, and a lot of scripting and utilities around the Ruby and Ruby on Rails application lifecycle such as Capistrano - has taken a punch in favor of Python, Go, and JavaScript. With the rise in popularity of these languages & frameworks, we see less and less use of Ruby.

We assume there will always be something written in Ruby, but it will most definitely not be the language of choice when we are required to develop a utility / micro-app.

We haven’t seen a use of Perl for a long time in our domain. Since there are better options, like Python scripts, Ansible and etc., we don’t see a reason to keep using it.

The cluster management and orchestration features embedded in the Docker Engine

Stop using it

  • K8s (Cloud Native) became the standard in the industry
  • The Docker platform is getting support for Kubernetes. This means that developers and operators can build apps with Docker and seamlessly test and deploy them using both Docker Swarm and Kubernetes.
  • Most SaaS offerings support K8s deployment
  • It looks like, in most cases, when you go to production with a container cluster it will be K8s

OpenStack is a free and open-source software platform for cloud computing, mostly deployed as infrastructure-as-a-service (IaaS), whereby virtual servers and other resources are made available to customers. The software platform consists of interrelated components that control diverse, multi-vendor hardware pools of processing, storage, and networking resources throughout a data center. Users either manage it through a web-based dashboard, through command-line tools, or through RESTful web services.

On top of the OpenStack core services that provide the basic cloud functionality of managing compute, storage, and networking resources, the OpenStack project contains a set of services (sometimes named ‘the Big Tent’) that provide a lot of complementary functionality such as shared storage, databases, monitoring, orchestration, and much more.

The creation and maintenance of the “batch build process” are part of R&D and not of DevOps:

  • Maven, Gradle, and SBT have structured, common practices that are more of an “industry standard”, and in most cases the developers are familiar with, and maintain, the build
  • Build tools like Gradle use a Groovy DSL, and in some cases the “build” is like any other code
  • The DevOps responsibility is to invoke it from the CI manager

ESB An enterprise service bus (ESB) implements a communication system between mutually interacting software applications in a service-oriented architecture (SOA).

This architecture design is being practiced less and less in the emerging Cloud Native era. ESBs are being replaced by message brokers/queues and service meshes, which implement a similar control plane architecture but in a more dynamic and distributed manner; the ESB implements a centralized, tightly coupled one, which contradicts the dynamic nature of microservices.

With the rise of message queues such as Kafka and standards such as AMQP and many others, ESBs have become obsolete. Furthermore, the service mesh we see today implements the same capabilities, enabling routing standards to be shared within the mesh and enabling many more features ESBs could not handle due to their level of operation in the stack.

We assume we will still see ESBs maintained in corporations powered by monolithic, slow-changing applications, but as they move to microservices this component will probably become obsolete.

Start


Although Python is a seasoned language, started back in the early 90’s, in recent years its use has accelerated greatly. With a fast learning curve and ease of use, it is only natural to see it flourish in more and more areas of computer programming. The rise of machine learning brought Python to the center, as the most popular machine learning libraries have APIs in Python. Together with popular mathematical and data analysis libraries such as numpy and pandas, this made Python the natural choice for data scientists and mathematicians. Python has also made its way into additional areas such as Big Data, microservices, and DevOps.

Polyglot programming is an emerging paradigm in the software development world. The main idea behind this paradigm is that a certain task should be written in the language that is most suitable for it. A polyglot programmer is a person who is able to program a production-level component in more than one language. This ability gives the programmer a broader choice of the adequate language for the task. For instance, a multithreaded requirement is more suitable to be implemented in JVM-based languages, while linear algebra is faster and easier to implement in Python. The microservices movement emphasized the need for polyglot expertise even more. In a single microservices system, different components are implemented in different languages and communicate between them using language-free communication methods such as REST and messaging. Nowadays, programming teams are required to support systems that are implemented in more than one language.

Kafka Streams is a client library for building applications and microservices, where the input and output data are stored in Kafka clusters. It combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka’s server-side cluster technology.

The library allows developing stateful stream-processing applications that are scalable, elastic, and fully fault-tolerant. The main API is a stream-processing DSL that offers high-level operators like filter, map, grouping, windowing, aggregation, joins, and the notion of tables.

Druid is a data store used mostly for analytic applications. It is designed for sub-second queries on real-time and historical data. Druid provides real-time data ingestion, flexible data exploration, and fast data aggregation. Existing Druid deployments have scaled to trillions of events and petabytes of data.

We think it is a good choice as a real-time analytics datastore for high amounts of data.

Edge Computing (a.k.a. Fog Computing) refers to the computing method where data is processed near its source, at what is being called the “edge of the network”. Smaller devices such as laptops, smartphones, gateways, sensors and other microprocessors are used as the computing power.

Edge vs. cloud computing: as opposed to “cloud computing”, where all data is sent to a centralized storage and computing engine, much of the critical processing and decision making is done on devices and gadgets near the source of the data.

Benefits: processing at the edge lowers the dependence on the cloud and better manages the massive deluge of data generated by the IoT. It also reduces latency for critical applications and makes connected applications more responsive and robust.

Explosion of devices: Gartner estimates that there will be 25 billion things connected to the Internet by 2020. The IoT is exploding because it is now relatively cheap to embed devices with smart sensors. Needless to say, the move to edge computing is crucial and inevitable.

Blockchain is probably the hottest ever buzzword in tech. Hotter than AI, hotter than VR. Surely hotter than Kubernetes. $6B was raised in ICOs by blockchain-related startups in 2017 - more than the entire VC money invested in early-stage startups.

Is blockchain just a buzz? Chris Dixon, a VC who does believe in blockchain, says: [blockchain] is probably the largest and smartest software engineering organization in the world. There are probably 20,000 programmers, if not more, working on this stuff. There’s a lot of really really smart people. I spend a lot of time with them and I’m just constantly impressed. Do you want to bet against those 20,000 really smart engineers who are like super passionate about this and working on it all the time? I wouldn’t want to bet against them.

The $6B is going to be used mainly in R&D in the coming years, and I see 3 main opportunities for Tikal here:

  • Support development of infrastructure projects: most of the bigger ICOs are infrastructure projects, i.e. blockchains and technologies that will enable future decentralized app development. HyperLedger seems to be taking a major role in this emerging world.
  • Support development of Ethereum dapps: Ethereum is a rather mature blockchain and very popular. It already has vast dev tools such as Truffle, Ganache, OpenZeppelin, MetaMask and many more. This is the first blockchain where demand for expertise in such tools for developing dapps will be required.
  • Support usage of edge clouds based on blockchains: two major projects, Golem and Dfinity, aim to decentralize the cloud. It is not far-fetched to believe that they will be major competition for AWS, Google Cloud and Azure, and expertise in them will be required (DevOps).

Kotlin is a statically typed programming language that runs on the Java virtual machine and can also be compiled to JavaScript source code or use the LLVM compiler infrastructure. Its primary development is by a team of JetBrains programmers based in Saint Petersburg, Russia. While the syntax is not compatible with Java, the JVM implementation of Kotlin’s standard library is designed to interoperate with Java code and relies on Java code from the existing Java Class Library, such as the collections framework. Kotlin uses aggressive type inference to determine the type of values and expressions for which the type has been left unstated. This reduces language verbosity relative to Java, which, prior to version 10, often demands entirely redundant type specifications.

Go is a programming language introduced by Google in 2009. It is a compiled and strongly typed language similar to C, but with a much more intuitive syntax. Go is essentially a procedural language rather than a strict OOP one, designed for high performance (it compiles to native machine code) and for concurrency without the bother of dealing with thread synchronization.

Our perspective is to define the distinct aspects in which Go may give better performance than the ‘standard’ stack of Java / Python / Node.js in backend development.

ClickHouse is an open source column-oriented database management system capable of real time generation of analytical data reports using SQL queries.

The Event Sourcing pattern defines an approach to handling operations on data that is driven by a sequence of events, each of which is recorded in an append-only store. Application code sends a series of events that imperatively describe each action that has occurred on the data to the event store, where they are persisted. The event store acts as the system of record (the authoritative data source) about the current state of the data, and can be used to materialize the domain objects. This can simplify tasks in complex domains by avoiding the need to synchronize the data model and the business domain, while improving performance, scalability, and responsiveness. It can also provide consistency for transactional data, and maintain full audit trails and history that can enable compensating actions.
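A minimal sketch of the pattern (all names are illustrative, not a library API): events are appended to an event store that acts as the system of record, and the current state is materialized by replaying - folding over - the events.

```javascript
// Append-only event log: the system of record in this sketch.
const eventStore = [];

function append(event) {
  // Events are never updated or deleted, only appended with a sequence number.
  eventStore.push({ ...event, seq: eventStore.length });
}

// Materialize a domain object (here, an account balance) from its event history.
function materializeBalance(accountId) {
  return eventStore
    .filter((e) => e.accountId === accountId)
    .reduce((balance, e) => {
      if (e.type === 'Deposited') return balance + e.amount;
      if (e.type === 'Withdrawn') return balance - e.amount;
      return balance;
    }, 0);
}

append({ type: 'Deposited', accountId: 'a1', amount: 100 });
append({ type: 'Withdrawn', accountId: 'a1', amount: 30 });

const balance = materializeBalance('a1'); // 70
```

Because the log is append-only, the same history can later be replayed into different read models, and it doubles as a full audit trail.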

Spring 5 introduces Reactive programming into the framework, in the core and web libraries, and introduces WebFlux as a reactive alternative to WebMVC. This allows developing a reactive, i.e. asynchronous and non-blocking, Spring application. Mastering Spring 5 will:

  1. Allow adding reactive capabilities to existing Spring applications.
  2. Explore more reactive libraries and options when facing the need to choose a relevant framework.
  3. Allow backend developers to advance to working with Spring 5, and fully understand its capabilities, advantages and disadvantages.

The core of Apache Flink is a distributed streaming dataflow engine written in Java and Scala. Flink executes arbitrary dataflow programs in a data-parallel and pipelined manner. Flink’s pipelined runtime system enables the execution of bulk/batch and stream processing programs. Furthermore, Flink’s runtime supports the execution of iterative algorithms natively. Flink provides a high-throughput, low-latency streaming engine as well as support for event-time processing and state management. Flink applications are fault-tolerant in the event of machine failure and support exactly-once semantics.

Aerospike is a new strongly-consistent, high-performance NoSQL database, aimed at transactional systems. This DB is both horizontally and vertically scalable with (as claimed) fast development and onboarding processes.

Scikit-learn is one of the most popular Python libraries, providing simple and efficient tools for data mining and data analysis. It includes Machine Learning services such as classification, regression, clustering, dimensionality reduction, model selection and data preprocessing. It is based on the fast and popular NumPy, SciPy and matplotlib Python libraries. It is one of the most popular choices for machine learning engineers and is updated regularly by a vast number of contributors.

Spark MLlib is a Machine Learning library that comes on top of Apache Spark. It is the best solution for applying Machine Learning algorithms to large amounts of data stored in distributed file systems, like HDFS, or in the cloud.

We promote Spark MLlib as a library for Machine Learning, as it is part of the Spark ecosystem and is most suitable for Big Data.

Apache Beam is an open source, unified model for defining both batch and streaming data-parallel processing pipelines. Beam supplies an SDK in different languages (Java, Python, Scala) to express the flow and processing of the data. The SDK then applies the runner according to the chosen platform (Spark, Flink, gear…). Beam aims to lead the industry on the proper way to deal with batch and streaming in the same pipeline. In addition, Beam addresses issues like processing time vs. event time, and handling late data.

Event Streaming platforms, namely Kafka, became very popular. In the past, Kafka was mostly used as a source of streaming and as a feed for a Data Lake that served as the Source of Truth. This new approach suggests that Kafka itself can become the Source of Truth, and not only the source for another Data Lake. With this approach, instead of making sure that the data is persisted to another persistence layer, we just leave it in Kafka forever, and query it directly from Kafka when needed. This does not replace any kind of View Database that should be used when an application needs to query the data, but rather serves as a location from which data can be retrieved when we need historical data.

Domain-driven design is a software development/architecture approach that places the focus on the domain logic first, in contrast to the classical “actor model” approach.

Solidity is a contract-oriented, high-level language for implementing smart contracts. It was influenced by C++, Python and JavaScript and is designed to target the Ethereum Virtual Machine (EVM).

Solidity is statically typed, supports inheritance, libraries and complex user-defined types among other features.

Solidity is currently very isolated, meaning it doesn’t support any outbound communication; it can only use data that it has received as input, or data in the blockchain it is working against. It can’t access the internet for APIs or any other data consumption.

Another very unique aspect of working with Solidity is that bugs can have crucial consequences: a deployed smart contract is by nature immutable, meaning you can’t change, upgrade or override deployed code. This has historically led to humongous crises, and it heightens the vulnerability and security considerations when deploying Solidity code. Read this from the docs: “While it is usually quite easy to build software that works as expected, it is much harder to check that nobody can use it in a way that was not anticipated.”

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

  • GraphQL queries always return predictable results.
  • While typical REST APIs require loading from multiple URLs, GraphQL APIs get all the data your app needs in a single request.
  • GraphQL uses types to ensure Apps only ask for what’s possible and provide clear and helpful errors.

GraphQL link
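As a sketch (the schema and fields here are hypothetical), a client names exactly the fields it needs, and the response mirrors the shape of the query:

```graphql
# Illustrative query against a hypothetical schema
{
  user(id: "1") {
    name
    posts {
      title
    }
  }
}
```

A conforming server would answer with a JSON object containing only the requested name and post titles - nothing more.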

The Apollo Platform is a family of technologies you can incrementally add to your stack: Apollo Client to connect data to your UI, Apollo Engine for infrastructure and tooling, and Apollo Server to translate your REST API and backends into a GraphQL schema.

  • Apollo Client Bind data to your UI with the ultra-flexible, community-driven GraphQL client for React, JavaScript, and native platforms.
  • Apollo Server Translate your existing REST APIs and backends into GraphQL with this powerful set of tools for building GraphQL APIs.
  • Apollo Engine The GraphQL gateway that provides essential features including caching, performance tracing, and error tracking.

Apollo link

gRPC (Google Remote Procedure Call) is an open source remote procedure call (RPC) system initially developed at Google. It uses HTTP/2 for transport, Protocol Buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, blocking or nonblocking bindings, and cancellation and timeouts. It generates cross-platform client and server bindings for many languages (C++, Java, Python, Node…).
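As an illustration, the Protocol Buffers interface description from which gRPC generates client and server bindings could look like this (the classic hello-world shape; service and message names are just examples):

```protobuf
syntax = "proto3";

// A hypothetical service definition; gRPC generates client and server
// bindings for each target language from this file.
service Greeter {
  // A unary RPC; gRPC also supports client-, server- and bidirectional streaming.
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```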

Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes ranging from gigabytes to petabytes.

We use Presto, like Hive, on the Data Lake. It can generally provide better response times than Hive.

TensorFlow is an open source software library for high-performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. It comes with strong support for machine learning and deep learning and the flexible numerical computation core is used across many other scientific domains. TensorFlow is arguably the most popular machine learning and deep learning library as of 2017. Most of the companies practicing machine learning are using TensorFlow in different levels of their product. TensorFlow also contains a submodule called TensorFlow Serving. TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. It makes it easy to deploy new algorithms and experiments while keeping the same server architecture and APIs.

Keep

Robust, age-proven and renewing: the Java language, although it has been with us for a long time already, is renewing itself. Starting from Java 9, we now see a new version of Java every 6 months. This means more features, much faster. I still believe that using Java gives a very strict and obvious structure to the program, which is sometimes lacking in other programming languages.

Framework support: the rise of reactive frameworks (Vert.x, Spring Reactor and others) and micro frameworks (Spark Java, JavaLite, light-4j) makes Java very versatile in its uses and use cases. With the above-mentioned frameworks we can now easily program reactive systems and microservices with a minimal footprint. Java always goes with the trends (although sometimes a bit behind), as we can see with the recent Docker support in the JVM.

The Java community is a huge one, which cannot be neglected.

Apache Spark is an open-source cluster-computing framework. It is mostly used to process Big Data, often in the cloud. Apache Spark has a Core package and provides 4 more libraries: Spark SQL, Spark Streaming, Spark MLlib and GraphX.

We use Apache Spark as a preferred solution for Big Data processing because it is mature, has penetrated the market, and has a big community.

Akka is a free and open-source toolkit and runtime simplifying the construction of concurrent and distributed applications on the JVM. Akka supports multiple programming models for concurrency, but it emphasizes actor-based concurrency, with inspiration drawn from Erlang.

Vert.x is a toolkit to build distributed reactive systems on the top of the Java Virtual Machine using an asynchronous and non-blocking development model. As a toolkit, Vert.x can be used in many contexts: in a standalone application or embedded in a Spring application. Vert.x and its ecosystem are just jar files used as any other library: just place them in your classpath and you are done. However, as Vert.x is a toolkit, it does not provide an all-in-one solution, but provides the building blocks to build your own solution.

Node.js is the fastest-growing development platform in history. Bundled with npm - the largest dependency registry to date - it is the “go to” solution for startups and enterprises alike.

Apache Kafka is a distributed streaming platform.

Kafka has three key capabilities:

  • Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
  • Store streams of records in a fault-tolerant durable way.
  • Process streams of records as they occur.

Kafka is generally used for two broad classes of applications:

  • Building real-time streaming data pipelines that reliably get data between systems or applications
  • Building real-time streaming applications that transform or react to the streams of data

A more comprehensive definition could be read here

Grafana’s motto is “The analytics platform for all your metrics”, and over the past ~5 years Grafana has done just that, becoming a centralized hub of data processing and visualization.

Grafana supports a big variety of data sources, enabling processing and real-time visualization of time-series data originating from many data sources simultaneously - from traditional databases such as MySQL and PostgreSQL to time-series databases such as OpenTSDB, Graphite, Prometheus, Elasticsearch, and others.

Grafana also provides an extensible plugin interface, adding the ability to enrich visualizations with plugins or custom “data sources”.

The sugar coating of Grafana is the Grafana community website, which maintains a centralized hub for hosting plugins and dashboards that the community can download & manage via source control. In many cases, there is either a ready dashboard for your use case or at least a good starting point.

Grafana also supports an HA installation method, authentication [Basic/LDAP/OAuth/etc] & authorization schemes and organization management.

Read more

The Express server project is the essence of the Node.js ecosystem: minimalistic, non-opinionated, fast and easily extendable. Easy to use and not so hard to master - every Node.js developer should know the ins and outs of this library.

The OpenAPI Specification (OAS) is a standard concerning APIs expressed as URI endpoints (namely HTTP and HTTPS URLs). The standard specifies how to describe them in a way that is:

  • easy to work on for humans
  • machine readable - which means validatable, and once valid - usable as a source for code generation.

Code generations may include:

  • generated clients
  • generated server-stubs (waiting to be filled with BL by developers)
  • generated tests that test the servers
  • generated generic UI to interact with the server
  • generated human readable & browsable interactive documentation

This means that once you have your generators in place, all you need to do is provide the OAS spec doc and enjoy all the generated solutions.
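A minimal sketch of such a spec doc, in OAS 2.0 style, from which the generators above could work (the API, path and fields are illustrative):

```yaml
swagger: "2.0"
info:
  title: Pets API          # illustrative API
  version: "1.0"
paths:
  /pets/{id}:
    get:
      summary: Fetch a single pet by id
      parameters:
        - name: id
          in: path
          required: true
          type: string
      responses:
        "200":
          description: The requested pet
```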

The standard is embraced by Amazon, and more.

To old-timers it looks like a quicker, lighter and cooler version of its predecessor - WSDL - and its modernity signifies the maturing of the developer community.

https://nordicapis.com/what-should-you-consider-before-openapi-adoption/

Serverless computing is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity. The idea is to provide only the code and let the underlying cloud provider manage the infrastructure.

Firebase Platform

Firebase is a powerful platform for Mobile Android, iOS and WebApps.

Firebase gives you functionality like analytics, databases, messaging and crash reporting so you can move quickly and focus on your users. Firebase is built on Google infrastructure and scales automatically.

Firebase gives you many services such as: analytics, Auth, Realtime database, Cloud messaging, Storage, Crash reporting and more.

A single Firebase Console is provided to control the services and data. It is easy to use and set up using the Firebase SDK libraries, and for most of the services there is a wide range of documentation and code examples.

Reactive Programming is a paradigm that changes the direction of the flow. Instead of creating threads and blocking for communication, we create pipelines through which data is passed, and each layer can react to the data it receives. This indirection changes the code flow from blocking to asynchronous.

Over time the reactive extensions have become a standard for adding higher-level functionality on a stream of data while staying agnostic to the data itself. The extensions are based on the observer pattern, and allow for cancellation of in-stream data processing.
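To make the idea concrete, here is a toy observable in plain JavaScript - a sketch of the pattern only, not the actual Rx API - showing an operator that adds functionality on the stream while staying agnostic to the data, plus a cancellation hook:

```javascript
// A source observable that pushes a fixed list of values to a subscriber.
function fromArray(values) {
  return {
    subscribe(observer) {
      let cancelled = false;
      for (const v of values) {
        if (cancelled) return;
        observer.next(v);
      }
      observer.complete();
      // Cancellation hook: flipping the flag stops further emissions.
      return { unsubscribe() { cancelled = true; } };
    },
  };
}

// An operator: wraps a source and transforms each value, data-agnostic.
function map(source, fn) {
  return {
    subscribe(observer) {
      return source.subscribe({
        next: (v) => observer.next(fn(v)),
        complete: () => observer.complete(),
      });
    },
  };
}

const received = [];
map(fromArray([1, 2, 3]), (n) => n * 10).subscribe({
  next: (v) => received.push(v),
  complete: () => received.push('done'),
});
// received: [10, 20, 30, 'done']
```

Real implementations such as RxJS add scheduling, error channels and dozens of operators on top of exactly this shape.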

A data lake is a method of storing data within a system or repository, in its natural format, that facilitates the collocation of data in various schemata and structural forms, usually object blobs or files. The idea of data lake is to have a single store of all data in the enterprise ranging from raw data to transformed data which is used for various tasks including reporting, visualization, analytics and machine learning. The data lake includes structured data from relational databases (rows and columns), semi-structured data (CSV, logs, XML, JSON), unstructured data (emails, documents, PDFs) and even binary data (images, audio, video) thus creating a centralized data store accommodating all forms of data.

We recommend that our customers move from a central Data Warehouse to a Data Lake architecture. This is a good approach for keeping a Single Source of Truth that is the raw data, not the processed data.

Stop

While Ruby is a truly beautiful language, designed first for the humans who write it and second for the computers that read it, it has failed to gain traction and has become a niche language. True, it allows for super quick prototyping, and it scales very well, but it also requires a very different mindset than other languages, causing a strange side effect in its users - moving away from it (or towards it) is a very difficult task, requiring developers to unlearn a whole lot of what they knew until now. For developers, the fact that fewer new companies choose Ruby as their main language means that demand is dwindling.

For CTOs, choosing Ruby as the main language is a difficult decision to make because there is very little supply of developers, making talent acquisition cumbersome and expensive (either hiring a very expensive experienced developer or teaching Ruby to a developer who does not know the language yet).

On the bottom line - Ruby is a fun language to learn, but because supply and demand are both on a downward path, it is not the best path to go down.

It seems like the utilization of Ruby as a software development language - once very popular in the DevOps movement, mainly with tools like Chef, Puppet, Logstash and Fluentd, and with a lot of scripting and utilities around the Ruby and Ruby on Rails application lifecycle, such as Capistrano - has taken a punch in favor of Python, Go and JavaScript. With the rise in popularity of these languages & frameworks, we see less and less use of Ruby.

We assume there will always be something written in Ruby, but it will most definitely not be the language of choice when we are required to develop a utility / micro-app.

Maintainability: it is very hard to maintain a monolith application. The design at the beginning might be a simple one, but soon enough things start to become very complex, even too complex. Adding a small feature or simply fixing a small bug can take days just because of the many dependencies and interactions. Deployment and scalability: deploying such a large system is complex at best, and there is also a large impact on startup times and memory footprint. Scaling such a monolith is expensive and tedious.

The main problem of the monolith application is that it does not stop growing.

Start

Flow is a static type checker for your JavaScript code, developed and promoted by Facebook as an alternative to TypeScript. Flow uses static type annotations; typing can also be inferred by Flow implicitly by analyzing the code. For example - with a type declaration:

// @flow
function square(n: number): number {
  return n * n;
}

square("2"); // Error!

and without:

// @flow
function square(n) {
  return n * n; // Error!
}

square("2");

Flow uses a compiler (Babel or flow-remove-types) to remove the Flow type annotations for the runtime JS.

We believe that Flow can provide the safety of type checking with minimal impact on the amount and style of code.

ES2017 is a new standard that got into the JS language, partially supported by browsers and engines. It brings new language features such as async / await, new browser APIs such as shared memory, and so on.

Vue.js is a JavaScript framework that has been around since 2014 but is becoming more and more popular in the JS / frontend community, and is considered (today) the third most used framework (after React and Angular). Vue.js is designed in such a way that it can be incrementally adopted, scaling between a library and a framework depending on the use case - from a view layer only in part of the application up to a full-blown ecosystem for complex Single Page Applications.

Styled-Components is a library that easily allows developers to take advantage of the best of all worlds of modern CSS styling with minimal setup effort. It enables CSS-modules-like scoping by default to avoid style collisions, and enables complex style hierarchies, functions, and variables without setting up a complex Webpack configuration. We think this is a very efficient and powerful solution for modern web app styling.

Consider the following benchmark for web-frameworks: https://github.com/hbakhtiyor/node-frameworks-benchmark

The benchmark is simple and reproducible.

From the results one can see that Koa performs quite well. With 29k stars and 1.8k forks on GitHub, it is worth a try and a follow-up. https://github.com/koajs/koa

What is it? Prettier is an opinionated code formatter. It enforces a consistent style by parsing your code and re-printing it with its own rules.

Why does it matter? Having a code “style guide” is very valuable for a project, it minimizes friction between developers that can arise from different coding styles and it helps to focus on the code instead of focusing on formatting.

How does it work? Maintaining a codebase that is readable for everyone is not an easy task without automating the formatting process. Prettier is an automatic code formatter that comes preconfigured with “best practice” and standard rules that can be overridden via a configuration file.

When to use it? In any project! Developed by the people that built React and React Native and used by a lot of major projects - Prettier is a must-have in any project.
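For example, a .prettierrc file overriding a few of the defaults might look like this (the option names are real Prettier options; the values are just a matter of team taste):

```json
{
  "printWidth": 100,
  "singleQuote": true,
  "trailingComma": "es5"
}
```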

Parcel is a very fast, ‘zero config’ bundler, and therefore a very appealing alternative to Webpack (for small/medium projects).

The goal of this project is to create a workshop for creating a vanilla/react boilerplate with Parcel.

  • https://parceljs.org/
  • https://github.com/parcel-bundler/parcel
  • https://medium.freecodecamp.org/all-you-need-to-know-about-parcel-dbe151b70082
  • https://codeburst.io/first-impressions-with-parcel-js-eb81fdcc1282

Stenciljs is a compiler for building Web Components that can be used in JavaScript projects (Angular, React, Vue) or in a vanilla project. The produced code also includes: a tiny virtual DOM layer, efficient one-way data binding, an asynchronous rendering pipeline (similar to React Fiber), and lazy-loading.

Stenciljs does not use the shadow DOM.

Stenciljs

Storybook has become an essential tool in the component development toolbox. It allows for development and showcasing of components easily with minimal environment configuration. Many libraries use storybook to create component catalogs.

Storybook is easily pluggable and there are very useful plugins.

A reference storybook by AirBnB can be found at http://airbnb.io/react-dates/


Client applications have become more and more complex and contain more and more code over the years.

In many cases, organisations start to think about migrating or rewriting their code.

On the server side, many organisations have solved this problem by splitting their monolith into microservices; on the client side you can actually do the same.

Microfrontend is still fresh. Every organisation builds its own proprietary standard for its solution; nevertheless, the standards and patterns are the same.

We need to build a library / framework that implements a general standard we can offer to organisations, so that this architecture can be implemented much quicker and easier.

Keep

TypeScript has become popular once again since the Angular team chose it as their main programming language. Many of TS’s standards were embraced in ES2015-ES2018, and it has become heavily used in almost every JavaScript framework.

React is old news, but still it keeps changing and improving. React 16 has many improvements and features which are worth tracking.

Among the changes are some long-standing feature requests, including fragments, error boundaries, portals, support for custom DOM attributes, improved server-side rendering, and reduced file size.

The Angular framework, less than two years after its first release, has gained a strong community and a large number of users.

From v2 to v6, Angular has proven a relatively easy migration path along with frequent major releases, which means that it is here to stay.

React Native - building native apps with React.

React Native is the first and most popular paradigm for building native apps with JS. These are not the weak PhoneGap-like web apps but real native applications, with real native-like performance.

RN is not dependent on any vendor in order to support new native features; it is in sync with the native OS components, which is partly what makes it so strong.

It is also relatively easy to pick up, especially if you have worked with React on the web before.

It has a vivid community with many users and contributors.

This technology really bridges frontend and mobile in the best way I know of to date.

Lastly, RN is used in React VR/AR.

refs:

  • official site : https://facebook.github.io/react-native/
  • popular ui framework - https://react-native-training.github.io/react-native-elements/docs/button.html
  • trends overview - https://trends.google.com/trends/explore?q=react%20native,native%20base,phonegap,%2Fm%2F0gtv959,%2Fm%2F05q31

Webpack (currently in v4.6) is a module bundler. It analyzes a cluster of interdependent modules (mainly JS but also other assets) and produces bundles of static items.

Webpack is highly configurable (which is sometimes considered one of its downsides) and has a very rich plugin ecosystem.

In the current version, Webpack introduced a `mode` option with sensible zero-configuration defaults for development and production builds.

Webpack is the de facto standard for frontend web development in general and builds in particular.
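To make the configuration style concrete, here is a hypothetical minimal `webpack.config.ts` sketch; the entry and output paths are examples, not project defaults:

```typescript
// webpack.config.ts - a hypothetical minimal configuration; paths are examples.
import * as path from "path";

const config = {
  mode: "production" as const,     // v4's mode option enables sensible defaults
  entry: "./src/index.ts",         // root module of the dependency graph
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "bundle.[contenthash].js", // static bundle emitted by webpack
  },
};

export default config;
```

Everything else (loaders, plugins, code splitting) is layered on top of this shape.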

Sass and Less are dynamic preprocessor style sheet languages that compile to CSS. The use of one of the two is very widespread in the frontend domain, although, with the advancement of CSS itself (CSS3, CSS variables, etc.), more developers are returning to writing ‘pure’ CSS.

A minimalistic approach for managing npm package lifecycle scripts (build, test, deploy, etc.) with CLI tools.

Functional Programming is a programming paradigm that has become more popular in recent years; it has many benefits for event-driven architectures and other state/context handling and processing problems.
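As a minimal illustration of the idea, pure functions (whose output depends only on their input) compose into larger behavior without shared mutable state; the function names below are only for the example:

```typescript
// Pure functions: output depends only on input, no shared mutable state.
const add = (a: number, b: number): number => a + b;

// Composition builds behavior from small functions instead of stateful steps.
const compose =
  <A, B, C>(f: (b: B) => C, g: (a: A) => B) =>
  (x: A): C =>
    f(g(x));

const increment = (n: number): number => add(n, 1);
const double = (n: number): number => n * 2;
const incThenDouble = compose(double, increment);

incThenDouble(3); // → 8
```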

RxJS is an implementation of ReactiveX in Javascript, a library for reactive programming using Observables, to make it easier to compose asynchronous or callback-based code.

ReactiveX is more than an API, it’s an idea and a breakthrough in programming. It has inspired several other APIs, frameworks, and even programming languages.

It has the following principles:

Functional Avoid intricate stateful programs, using clean input/output functions over observable streams.

Less is more ReactiveX’s operators often reduce what was once an elaborate challenge into a few lines of code.

Async error handling Traditional try/catch is powerless for errors in asynchronous computations, but ReactiveX is equipped with proper mechanisms for handling errors.

Concurrency made easy Observables and Schedulers in ReactiveX allow the programmer to abstract away low-level threading, synchronization, and concurrency issues.
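The principles above can be sketched without the real RxJS library. The following is a deliberately tiny, hypothetical Observable (not the actual RxJS API) showing the error channel and the "less is more" operator style:

```typescript
// Minimal, illustrative Observable sketch (not the real RxJS API):
// a stream is a function that feeds values to a subscriber.
type Subscriber<T> = { next: (v: T) => void; error?: (e: unknown) => void };

class MiniObservable<T> {
  constructor(private producer: (s: Subscriber<T>) => void) {}

  subscribe(s: Subscriber<T>): void {
    try {
      this.producer(s);
    } catch (e) {
      // a proper error channel instead of try/catch at every call site
      s.error?.(e);
    }
  }

  // "Less is more": operators like map reduce elaborate logic to one line.
  map<R>(f: (v: T) => R): MiniObservable<R> {
    return new MiniObservable<R>((s) =>
      this.subscribe({ next: (v) => s.next(f(v)), error: s.error })
    );
  }
}

const doubled: number[] = [];
new MiniObservable<number>((s) => [1, 2, 3].forEach((v) => s.next(v)))
  .map((v) => v * 2)
  .subscribe({ next: (v) => doubled.push(v) });
// doubled is now [2, 4, 6]
```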

Stop

What is CoffeeScript?

CoffeeScript is a little language that compiles into JavaScript. Underneath that awkward Java-esque patina, JavaScript has always had a gorgeous heart. CoffeeScript is an attempt to expose the good parts of JavaScript in a simple way.

The golden rule of CoffeeScript is: “It’s just JavaScript.” The code compiles one-to-one into the equivalent JS, and there is no interpretation at runtime. You can use any existing JavaScript library seamlessly from CoffeeScript (and vice-versa). The compiled output is readable, pretty-printed, and tends to run as fast or faster than the equivalent handwritten JavaScript.

Version 2.3.0 was released on April 29, 2018

Why should we stop using it?

With ES6 and ES7, almost everything CoffeeScript brings to the table is already provided by JavaScript, be it Classes, setters & getters, interpolation and template strings, arrow functions, …rest args, destructuring and more.
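A small sketch of those features in modern JavaScript/TypeScript (the `Greeter` and `sum` names are only illustrative):

```typescript
// Modern JS/TS now covers what CoffeeScript offered:
class Greeter {
  constructor(private name: string) {}       // classes with concise fields
  get greeting(): string {                   // getters
    return `Hello, ${this.name}!`;           // template-string interpolation
  }
}

const sum = (...nums: number[]): number =>   // arrow functions + rest args
  nums.reduce((acc, n) => acc + n, 0);

const { greeting } = new Greeter("world");   // destructuring
// greeting is "Hello, world!", sum(1, 2, 3) is 6
```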

AngularJS (AKA Angular 1.x) is the older version of Angular. We are excluding it from the Radar since its time has passed. Today, new development will most likely use newer frameworks such as React, Angular 2+, or Vue.

What are they?

Grunt and Gulp are Task Runners, they are used to automate the build process and provide a set of tools to accommodate common tasks that are usually part of the build process. They both rely on a config file that describes the tasks that should be performed and on a large ecosystem of plugins.

While Grunt tasks are defined declaratively, Gulp tasks are defined as JS functions.

Why should we stop using them?

Grunt’s latest release was in April 2016 and Gulp’s in February 2016. With the introduction of Webpack, which does all of the above and adds functionality like module bundling and tree shaking, Grunt and Gulp have become obsolete. While the Gulp/Grunt approach is to work with files defined in configuration, Webpack performs code analysis and builds only what is required.

Yeoman is a scaffolding CLI tool that has many plugins for generating code and projects. Its peak of popularity came when AngularJS and more comprehensive client-side SPA technologies started to emerge, and getting started with a new project became relatively difficult, with lots of boilerplate code and configuration required. Yeoman made this simple.

Over time, React and then Angular released their own scaffolding CLI tools that made Yeoman redundant. Another issue with Yeoman is that there are a lot of plugins, each with a different flavor, dependencies, and architecture approach. This makes choosing the right plugin for the job a hassle, and many times developers just skip the scaffolding and create the new project from scratch or from a cookie-cutter project.

Our opinion is that OO in the manner of classes and inheritance trees and JS don’t mix well in the first place, particularly because of all the gotchas around the `this` keyword. While arrow functions help, the problem is still there, just less common and therefore more of a gotcha. (Evidence of how problematic it is can be drawn from the design decision in later languages, e.g. Go, not to include it in the first place.)

One more source of confusion is the discussion about privates, which can be implemented in JS using closures, but not as a class feature.

In fact, for the vast majority of use cases there are better design patterns that leverage built-in JS power, resulting in faster and cleaner implementations than classes; however, there is a minority of cases where you may consider using them.

Since ES6 classes require fewer keystrokes, compile behind the scenes to ES5 functions, and leverage scopes the way they should be used, there’s no reason to keep working with ES5 functions and prototypes. If you find yourself in that rare use case where classes will serve you better than closures, use the ES6 style.
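The two styles side by side, as an illustrative sketch (the counter is a made-up example):

```typescript
// Closure-based "private" state: count is truly unreachable from outside.
function makeCounter() {
  let count = 0; // captured by the closure, not a property on the object
  return {
    inc: () => ++count,
    value: () => count,
  };
}

// ES6 class style: far less typing than ES5 prototypes, but `this` gotchas
// remain and the field is publicly reachable.
class Counter {
  count = 0;
  inc = () => ++this.count; // arrow field keeps `this` bound
}

const c1 = makeCounter();
c1.inc();
c1.value(); // → 1
```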

Start

A set of Kotlin extensions for Android app development. The goal of Android KTX is to make Android development with Kotlin more concise, pleasant, and idiomatic by leveraging the features of the language such as extension functions/properties, lambdas, named parameters, and parameter defaults. It is an explicit goal of this project to not add any new features to the existing Android APIs.

Android KTX saves a lot of boilerplate code, increasing development speed and productivity.

Note that Android KTX is in preview; bugs and stability issues may occur.

A collection of libraries that help you design robust, testable, and maintainable apps. Start with classes for managing your UI component lifecycle and handling data persistence.

Historically, Google’s Android documentation examples lacked architecture and structure. This changes with the release of Android Architecture Components, a set of opinionated libraries that help developers create Android applications with better architecture. They address longstanding pain points of Android development: handling lifecycles; pagination; SQLite databases; and data persistence over configuration changes. The libraries don’t need to be used together — you can pick the ones you need most and integrate them into your existing project.

New lifecycle-aware components help you manage your activity and fragment lifecycles. Survive configuration changes, avoid memory leaks and easily load data into your UI using:

LiveData: an observable data holder class with lifecycle awareness. This awareness ensures LiveData only updates app-component observers that are in an active lifecycle state.

ViewModel: a class designed to store and manage UI-related data in a lifecycle-conscious way. The ViewModel class allows data to survive configuration changes such as screen rotations.

Room: a SQLite object-mapping library. Room provides compile-time checks of SQLite statements and can return RxJava Flowable and LiveData observables.

Paging: a library that makes it easier for your app to gradually load information as needed from a local or remote data source, without overloading the device or waiting too long for a big database query.
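The core idea behind LiveData, an observable value holder that only notifies "active" observers, can be sketched in a language-agnostic way (TypeScript here; `LiveValue` and its API are hypothetical, not the Android class):

```typescript
// Hypothetical sketch of LiveData's core idea: an observable value holder
// that only notifies observers in an "active" lifecycle state.
type Observer<T> = { active: boolean; onChanged: (v: T) => void };

class LiveValue<T> {
  private observers: Observer<T>[] = [];
  constructor(private value: T) {}

  getValue(): T {
    return this.value;
  }

  observe(o: Observer<T>): void {
    this.observers.push(o);
  }

  setValue(v: T): void {
    this.value = v;
    // inactive observers (e.g. a stopped Activity) are skipped, avoiding
    // leaks and updates to views that are not on screen
    this.observers.filter((o) => o.active).forEach((o) => o.onChanged(v));
  }
}

const seen: number[] = [];
const ui: Observer<number> = { active: true, onChanged: (v) => seen.push(v) };
const counter = new LiveValue(0);
counter.observe(ui);
counter.setValue(1); // delivered: observer is active
ui.active = false;   // lifecycle moved to stopped
counter.setValue(2); // skipped
// seen is [1]
```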

Android P Features and API Overview

It offers the following:

  • New RTT Wi-Fi and location tracking. The app must have the Location permission and Wi-Fi scanning on.
  • Smart notification improvements, such as inline smart reply from the notification bar, adding images to the reply, and saving replies as drafts.
  • Multi-camera support - access two camera devices at the same time.

Google Wear OS, introduced in March 2014 and previously known as Android Wear, is Google’s Android version for smartwatches and other wearables.

With Wear OS and the embedded Google Assistant, the James Bond world has never been so realistic. Check the weather or search for a restaurant by speaking to your watch. Get updates on your next meetings and directions, stay connected, and even pay using your watch. The future is here! And Wear OS can easily connect with both Android and iPhone.

Wear OS brings a new vision to people who love to be on the cutting edge of what mobile technology has to offer. Whether you are working out, listening to music, or on a business trip, Wear OS is the little thing that will take your activity to the next level.

Check out the Google Wear Homepage

RxJava and RxKotlin (Reactive Extensions) are among the most popular libraries, bringing Android development the advantages of asynchronous, event-based programming.

RxJava is written on top of the Reactive Streams specification and utilizes 3 fundamental building blocks:

  • The Observer Pattern
  • The Iterator Pattern
  • Functional Programming

In addition to the above, RxJava provides Scheduler objects, which manage multithreading tasks and allow easy data sharing across threads without the need for other Android framework tools such as Handlers or AsyncTasks.

RxJava 2, however, is an enhanced version of the original RxJava, written from scratch, that introduces backpressure processing.

With RxJava 2 our code becomes much more scalable and maintainable, event-driven, error-resilient, and readable.

The advantages of event-driven, Reactive Streams code have been proven already. Therefore, we should strive to use RxJava 2 as a fundamental part of our programs.

The full RxJava2 Javadoc · RxJava2 homepage

RxJava

Android, iOS/macOS, or web developers!

Have you ever thought animation could be so easy? Do you want to boost your web page or mobile app with cool animations that take your product to the highest standards of user experience? Now, with Lottie by Airbnb, animation has never been so easy and cool!

Lottie is a library developed by Airbnb for Android, iOS/macOS, Web, and even React Native apps that parses Adobe After Effects animations exported as JSON with Bodymovin and renders them natively on mobile and web. For the first time, designers can create and ship beautiful animations without an engineer painstakingly recreating them by hand. They say a picture is worth 1,000 words, so here are 13,000.

Checkout the Lottie documentation

It looks like Lottie is the next big thing in animation frameworks in the mobile and web world. On Android, so far we usually used native animation frameworks such as the ObjectAnimator or ValueAnimator APIs, defined in code or in Android resource XML; it seems that Lottie will do to animation what Retrofit did to HTTP request frameworks.

So far, Lottie is already used in many apps, such as Google Home, Target, and Uber.

The average person spends 1 hour per day commuting, and checks their smartphone on average 125 times a day. Cars get us where we are going, while phones keep us connected. Checking your smartphone while driving is distracting, causing 25% of all accidents. Android Auto lets us stay connected while driving and minimizes distractions. Developing for the Android Auto platform is easy and familiar to Android developers.

It is the same platform you already use for phones, tablets, watches, and more; all these experiences will often ship in the same APK. Now it can also extend to the car, in a way that is safer and more efficient for the driver, so they can stay connected with their hands on the wheel and eyes on the road.

Using Android Auto-enabled apps is easy: users download the app to their phone, connect the phone to the car, the phone goes into car mode, and it casts the Auto experience to the Android Auto screen. This means that even though the application is running on the phone, it is displayed on the car’s dash, and the driver can interact with it through car controls, voice, and the touch screen.

In addition to that, if the specific Android Auto device and car support the link, an API is available for the CAN bus, the protocol that car diagnostic tools use to diagnose the car, get data from car sensors, and optimize its performance. Due to the extremely sensitive nature of this API, it is available only to Android Auto OEMs and car manufacturers via a C/C++ API.

Machine learning on mobile devices offers an array of new and unique user experiences for problems that are close to the data people are working on. Experiences that were impossible before, like OCR, image recognition, machine translation, and speech recognition, were once in the realm of science fiction, but they are being brought into our daily lives. Thanks to machine learning on mobile and IoT devices, universal translators from Star Trek, the Babel Fish from The Hitchhiker’s Guide to the Galaxy, the “neural net processor, a learning computer” from Terminator 2, and more are not figments of writers’ imagination, but actual products that you can buy, or that are in development.

TensorFlow is an open-source library for dataflow programming, often used in deep learning. It was developed for internal use by the Google Brain team, then released to the public, and in 2017 a mobile/lite version was released targeting mobile and embedded/IoT platforms such as Android, iOS, and Raspberry Pi.

Android Things lets you build professional, mass-market products on a trusted platform, without previous knowledge of embedded system design. It reduces the large, upfront development costs and the risks inherent in getting your idea off the ground. When you’re ready to ship large quantities of devices, your costs also scale linearly and ongoing engineering and testing costs are minimized with Google-provided updates.

Android Things extends the core Android framework with additional APIs provided by the Things Support Library, which lets you integrate with new types of hardware not found on mobile devices. Developing apps for embedded devices is different from mobile in a few important ways such as:

  • More flexible access to hardware peripherals and drivers than on mobile devices
  • System apps are not present, to optimize startup and storage requirements; apps are launched automatically on startup to immerse your users in the app experience
  • Devices expose only one app to users, instead of multiple apps as on mobile devices


MVVM (Model-View-ViewModel) is an architectural approach used to abstract the state and behavior of a view, which allows us to separate the development of the UI from the business logic. This is accomplished by introducing a ViewModel, whose responsibility is to expose the data objects of a model and handle any of the application’s logic involved in displaying a view.

Google recently released the Android Architecture Components library that makes it easy to integrate this pattern with some more useful components.

Keep

React Native - building native apps with React.

React Native is the first and most popular paradigm for building native apps with JS. These are not weak PhoneGap-like web apps but real native applications, with real native-like performance.

RN does not depend on any vendor in order to support new native features; it stays in sync with the native OS components, which is partly what makes it so strong.

It is also relatively easy to pick up, especially if you worked with React Web before.

It has a vibrant community with many users and contributors.

This technology really bridges frontend and mobile in the best way we know of to date.

Lastly, RN is used in React VR/AR.

refs:

  • official site : https://facebook.github.io/react-native/
  • popular ui framework - https://react-native-training.github.io/react-native-elements/docs/button.html
  • trends overview - https://trends.google.com/trends/explore?q=react%20native,native%20base,phonegap,%2Fm%2F0gtv959,%2Fm%2F05q31

Java is currently the most popular programming language, used for building server-side, desktop, and mobile applications. It is the core foundation for developing Android apps, making it a favorite of many programmers. With its WORA mantra (write once, run anywhere), it is designed to be portable and to run across multiple software platforms.

Also, as Java runs on the JVM (Java Virtual Machine), many other languages can be compiled into JVM bytecode and interoperate with Java code. In addition, most of the Android API, example code, documentation, blog posts, forum discussions, and various open-source projects are written in Java, keeping it the primary development platform for Android for the foreseeable future.

Kotlin is a statically typed programming language that runs on JVM (Java Virtual Machine). Development lead Andrey Breslav has said that Kotlin is designed to be an industrial-strength object-oriented language, and a “better language” than Java, but still be fully interoperable with Java code, allowing companies to make a gradual migration from Java to Kotlin.

At Google I/O 2017, Google announced first-class support for Kotlin on Android. As of Android Studio 3.0 (October 2017) Kotlin is a fully supported programming language by Google on the Android Operating System, and is directly included in the Android Studio 3.0 IDE package as an alternative to the standard Java compiler.

Kotlin is now an official language on Android. It’s expressive, concise, and powerful. Best of all, it’s interoperable with our existing Android languages and runtime.

Besides its stylistic advantages over Java, Kotlin has new and exciting features that offer performance and stability improvements over Java code, such as Kotlin coroutines, Rx-like collection iteration modifiers, null safety, operator overloading, etc.

Google Support API

Background: almost any Android application relies on third-party services such as geolocation, database and storage, networking management, and UI tools and widgets.

In addition, as the Android framework evolves with every new API level, apps are still required to support old devices running older APIs. For this, the Android framework offers the Google Support API libraries, which provide services, UI tools, and backward compatibility support.

Support Library Uses

  1. Backward compatibility for newer APIs. For example, Fragments were introduced in API 11 (Android 3.0); if we want to use Fragments on older API levels, we can use the fragment support library.

  2. Helper classes. The Support Library provides helper classes and tools such as Fragments, RecyclerView, and ConstraintLayout.

  3. Testing and debugging tools. Testing and debugging are fundamental principles in software development, Android included. The Support Library offers a number of classes beyond our code, including the support-annotations library and the Testing Support Library, that help us test and debug our app.

Google Support API


Firebase Platform

Firebase is a powerful platform for mobile (Android, iOS) and web apps.

Firebase gives you functionality like analytics, databases, messaging and crash reporting so you can move quickly and focus on your users. Firebase is built on Google infrastructure and scales automatically.

Firebase provides many services, such as Analytics, Auth, Realtime Database, Cloud Messaging, Storage, Crash Reporting, and more.

A single Firebase Console is provided to control the services and data. Firebase is easy to use and set up via the Firebase SDK libraries, and there is a wide range of documentation and code examples covering the setup of most services.

What is new in Firebase: Cloud Firestore and Cloud Functions.

MVP - Model View Presenter

The MVP pattern lets us separate the presentation layer from the logic, so that everything about how the interface works is separated from how we represent it on screen. The first thing to clarify is that MVP is not an architectural pattern; it is only responsible for the presentation layer.

The presenter - responsible for acting as the middleman between the view and the model. It retrieves data from the model and returns it, formatted, to the view. Unlike in typical MVC, it also decides what happens when you interact with the view.

The view - usually implemented by an Activity/Fragment, it contains a reference to the presenter. The only thing the view does is call a method on the presenter every time there is an interface action (a button click, for example).

The model - only the gateway to the domain layer or business logic. It is enough to see it as the provider of the data we want to display in the view.

MVP makes views independent of our data source. We divide the application into at least three different layers, which lets us test them independently. With MVP we are able to take most of the logic out of the activities so that we can test it without instrumentation tests.
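A minimal, hypothetical MVP sketch (the `LoginView`/`LoginPresenter` names and "secret" credential are made up for illustration), showing why the presenter is unit-testable with fakes:

```typescript
// Hypothetical MVP sketch: the presenter holds no platform types, so it can
// be unit-tested with a fake view and model.
interface LoginView {
  showWelcome(name: string): void;
  showError(msg: string): void;
}

interface UserModel {
  authenticate(user: string, pass: string): boolean;
}

class LoginPresenter {
  constructor(private view: LoginView, private model: UserModel) {}

  // the view calls this on a button click; the presenter decides what happens
  onLoginClicked(user: string, pass: string): void {
    if (this.model.authenticate(user, pass)) {
      this.view.showWelcome(user);
    } else {
      this.view.showError("Invalid credentials");
    }
  }
}

// Unit testing with fakes - no Activity/Fragment or instrumentation needed:
const messages: string[] = [];
const fakeView: LoginView = {
  showWelcome: (n) => messages.push(`welcome ${n}`),
  showError: (m) => messages.push(m),
};
const fakeModel: UserModel = { authenticate: (_u, p) => p === "secret" };
new LoginPresenter(fakeView, fakeModel).onLoginClicked("dana", "secret");
// messages is ["welcome dana"]
```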

Dependency Injection

Dependency Injection is a design pattern that allows developers to write code with low coupling, which can therefore be easily tested.

Dependency Injection is a technique whereby one object (or static method) supplies the dependencies of another object.

A dependency is an object that can be used (a service). An injection is the passing of a dependency to a dependent object (a client) that would use it.

One of the most powerful and widely used DI frameworks on Android is Dagger, which is maintained by Square and Google engineers.
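The core idea, independent of any framework, is constructor injection; this sketch uses hypothetical `Logger`/`ReportService` names:

```typescript
// Constructor injection: the client declares what it needs; a caller supplies it.
interface Logger {
  log(msg: string): void;
}

class ConsoleLogger implements Logger {
  log(msg: string): void {
    console.log(msg);
  }
}

// ReportService does not construct its own Logger (low coupling), so tests
// can inject a fake without touching the real console.
class ReportService {
  constructor(private logger: Logger) {}
  run(): string {
    this.logger.log("report generated");
    return "ok";
  }
}

const lines: string[] = [];
const fakeLogger: Logger = { log: (m) => lines.push(m) };
const result = new ReportService(fakeLogger).run();
// result is "ok", lines is ["report generated"]
```

Frameworks like Dagger automate the wiring that is done by hand in the last three lines.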

Repository Pattern

You design a single API that abstracts away all of the storage details. The repository implementation handles all of the decision-making about where the data goes, what has to get updated, what has to be refreshed from a remote source, and so on.

The Repository has a few key roles:

  • Manages Data Storage
  • Normalizes Model Objects
  • Provides a Clean Reactive API
  • Isolates Rest of App from Strategy Changes

High-level repository strategies: Network + Network API Caching + Persistence (DB)

Why the Repository Pattern?

  • Decouples the application from the data sources
  • Provides data from multiple sources (DB, API) without clients being concerned about this
  • Isolates the data layer
  • Single place, centralized, consistent access to data
  • Testable business logic via Unit Tests
  • Easily add new sources
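A minimal, hypothetical repository sketch (the `User` type and cache-then-network strategy are illustrative, not a prescription):

```typescript
// Hypothetical repository sketch: clients ask for a user; the repository
// decides whether it comes from the local cache or the remote source.
interface User {
  id: number;
  name: string;
}

class UserRepository {
  private cache = new Map<number, User>();

  constructor(private fetchFromApi: (id: number) => User) {}

  getUser(id: number): User {
    const cached = this.cache.get(id);
    if (cached) return cached;          // local source wins when available
    const user = this.fetchFromApi(id); // fall back to the remote source
    this.cache.set(id, user);           // normalize into the single store
    return user;
  }
}

let apiCalls = 0;
const repo = new UserRepository((id) => {
  apiCalls++;
  return { id, name: `user-${id}` };
});
repo.getUser(1);
repo.getUser(1); // second call is served from the cache
// apiCalls is 1
```

Clients only see `getUser`; swapping the caching strategy or data source never touches them.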

Reactive programming is a relatively new programming paradigm. It provides a wide set of APIs to create Observables and Observers. In short: “The Observer pattern done right - ReactiveX is a combination of the best ideas from the Observer pattern, the Iterator pattern, and functional programming.”

Reactive programming offers tools and an easy-to-use API to observe UI changes, perform backend computation asynchronously on different threads (so as not to block the UI or main thread), and a wide range of operators to manipulate the data emitted to the observer. It also provides excellent error handling, all in an easy-to-understand syntax.

Many distinguished software companies have already adopted it.

Stop

RxJava – Reactive Extensions for the JVM

A library for composing asynchronous and event-based programs using observable sequences for the Java VM.

Why stop using it?

  • RxJava 1.X does not handle back-pressure
  • RxJava 1.X will be deprecated soon and developers will have to switch to RxJava 2.X at some point - better to do it now :)
  • RxJava 2.X has been completely rewritten from scratch on top of the Reactive-Streams specification. The specification itself has evolved out of RxJava 1.x and provides a common baseline for reactive systems and libraries.

Retrolambda lets you run Java 8 code with lambda expressions, method references and try-with-resources statements on Java 7, 6 or 5. It does this by transforming your Java 8 compiled bytecode so that it can run on an older Java runtime. After the transformation they are just a bunch of normal .class files, without any additional runtime dependencies.

Why stop using it?

  • Java 8 has been released and it includes lambda expressions.
  • RetroLambda has only limited support for backporting default methods and static methods on interfaces.

ListView is a view group that displays a list of scrollable items. The list items are automatically inserted into the list using an Adapter that pulls content from a source, such as an array or database query, and converts each item result into a view that’s placed into the list.

Why stop using it?

  • ListView does not recycle its items by default, which costs more memory and processing per view item.
  • RecyclerView performs better and manages memory by recycling the views.

SQLite was a good way to store and retrieve large sets of data until object-relational mapping (ORM) libraries were introduced, such as GreenDAO, SugarORM, Realm, and more.

Today, after almost a year, Google has announced the Architecture Components libraries, which also introduced Room. Room is built on top of SQLite and makes CRUD operations intuitive and easy to use, so we no longer need old-fashioned SQLite queries to insert and fetch data. Room also supports RxJava and LiveData return types, which makes the code cleaner and more robust.

Bolts is a collection of low-level libraries designed to make developing mobile apps easier. Bolts was designed by Parse and Facebook for their own internal use. Its core is “Tasks”, which make the organization of complex asynchronous code more manageable. A task is kind of like a JavaScript Promise.

Bolts is good for background work that returns a single value, such as general network requests, reading files off disk, etc. There are some disadvantages to this approach, such as:

  • switching between threads
  • returning more than a single value
  • error handling
  • callback hell

It is a good time to stop working with Bolts and switch to a better approach, like reactive programming.

Why does an application need a good architecture?

A simple answer is that everything should be organized in a proper way - and so should your Android application.

If an application is developed with no pattern, the following problems arise:

  • The code is not covered by unit tests.
  • It is difficult to debug a class because it contains a huge number of functions.
  • You are unable to keep track of the logic inside that huge class.
  • Other developers find it difficult to maintain and add new features to your work.

What are the benefits of a proper architecture?

  • Simplicity
  • Testability
  • Easy maintenance

What are the popular patterns?

  • MVP ( Model — View — Presenter)
  • MVVM (Model — View — ViewModel)
  • Clean Architecture

Why should you avoid using an event bus?

One common pattern in Android development is the use of event buses. Libraries like Otto and EventBus are often used to remove boilerplate listeners that require tunneling code through many layers of hierarchy. Although an event bus might seem convenient at first, they quickly devolve into a mess of tangled events that are incredibly hard to follow and even more difficult to debug.

Treating Producers Like Synchronous Getters

This is another common pattern that’s incredibly hard to undo in a larger codebase. Oftentimes, many activities or fragments will assume events are produced as soon as that component subscribes to the event bus. They’ll set a field based on the event, and then proceed to use that field in other lifecycle methods under the assumption that it is not null.

Alternatives

No library or tool will fix these problems for free, but some tools and patterns will encourage you to do things the right way.

An event bus, when used correctly, can probably avoid these issues. However, it also encourages these practices more than most tools. Using simple listeners, although they will require more code, will make things much easier to understand.

For more complex scenarios, RxJava provides a great solution.
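The "simple listener" alternative can be sketched as follows (the `Cart` example and its API are hypothetical); unlike a global bus, the wiring between producer and consumer is explicit and traceable:

```typescript
// A plain, explicit listener: the dependency between producer and consumer
// is visible in the code, unlike a global event bus where events "appear"
// from anywhere and are hard to follow or debug.
type CartListener = (itemCount: number) => void;

class Cart {
  private listeners: CartListener[] = [];
  private items: string[] = [];

  onChanged(listener: CartListener): void {
    this.listeners.push(listener);
  }

  add(item: string): void {
    this.items.push(item);
    this.listeners.forEach((l) => l(this.items.length));
  }
}

const badgeUpdates: number[] = [];
const cart = new Cart();
cart.onChanged((count) => badgeUpdates.push(count)); // explicit wiring
cart.add("book");
cart.add("pen");
// badgeUpdates is [1, 2]
```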

ABOUT THE RADAR

The Radar is a new initiative from Tikal to summarize our usage of, and opinions about, certain technology topics in our client solutions.

Our Radar has 4 domains - DevOps, Backend, Frontend, and Mobile - which map to our main core expertise.

Our Radar has three rings, described from the middle outwards:

  • The Start ring is for topics that we think are ready for use, but not as completely proven as those in the Keep ring. So for most organizations, we think you should use these on a trial basis, to decide whether they should be part of your toolkit.
  • The Keep ring represents topics that we think you should keep using now. We don't say that you should use these for every project; any tool should only be used in an appropriate context.
  • The Stop ring is for topics that are getting attention in the industry, but we don't think you should continue using them.