Big Data Architect
A developer at heart and an architect by nature, with more than 18 years in the industry. I am always hands-on, and the business is my main focus.
- Developing Java Classes 15
- NoSQL Database 9
- Spark 2
- Storm 6
- Kafka 8
- Spring 10
- Maven 12
- REST 15
- HTTP 15
- HDFS 8
- MapReduce 8
- Hadoop 8
- Using threads 12
- SQL 10
- Writing SQL Statements 10
- Implementing Data Access Classes 10
- Creating ERD 12
- Search (Solr/Lucene, ElasticSearch) 3
- Linux OS desktop (end user) / server (administration) experience 10
- Spring Data 3
- EJB / EJB3 15
- JDBC 15
- Scala 4
- Linux OS - power user 10
- JBoss 15
- Tomcat 15
- MySQL 10
- ANT 15
- Shell Script 15
- Multi-threading 15
- AWS 1
- Couchbase 2
- Redis 3
- MongoDB 3
- ElasticSearch 4
- Functional Programming 4
- Spark Streaming 2
- Spark ML 1
Big Data Architect @ AppsFlyer
Mission: to help the company scale its data processes.
- The ETL processes took too long and carried unnecessary dependencies. I helped reorder the processes and remove redundant ETL tasks. The challenge was to define what is platform and what is application, and to create a proper separation of duties.
- Worked on the AWS stack with S3, EC2, YARN, Spark, Hadoop, Druid, and Airflow.
- Languages: Java, Scala, Python
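The kind of cleanup described above, removing ETL dependencies that are already implied by other tasks, can be sketched in plain Python (a toy illustration, not the actual AppsFlyer code; task names are hypothetical):

```python
def prune_redundant_deps(deps):
    """deps: {task: set of prerequisite tasks}. Drop a direct dependency
    task -> p when p is already reachable through another prerequisite."""
    def reachable_from(task, acc):
        # Collect everything reachable from `task` through the graph.
        for p in deps.get(task, ()):
            if p not in acc:
                acc.add(p)
                reachable_from(p, acc)
        return acc

    pruned = {}
    for task, prereqs in deps.items():
        keep = set()
        for p in prereqs:
            # p is redundant if some *other* prerequisite already reaches it.
            others = set()
            for q in prereqs - {p}:
                others.add(q)
                reachable_from(q, others)
            if p not in others:
                keep.add(p)
        pruned[task] = keep
    return pruned

etl = {
    "load": set(),
    "clean": {"load"},
    "aggregate": {"clean", "load"},  # "load" is redundant here
}
print(prune_redundant_deps(etl))  # "aggregate" keeps only {"clean"}
```

In an orchestrator such as Airflow, fewer edges in the DAG means the scheduler can start independent tasks earlier instead of waiting on dependencies that were never truly needed.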
Big Data Architect @ Feedvisor
Mission: to build a new, scalable data architecture.
- As the company expanded its customer base, data collection had to advance in its ability to scale.
- Added the Apache Spark ecosystem and introduced it into several of the company's data pipelines.
- Built the complete architecture for the big data solution.
- The challenge was to define the data architecture, decide on the technical Big Data solution, and then implement the chosen one.
- Worked on the AWS stack with S3, EC2, and Spark on EMR.
- Languages: Java, Scala
Big Data Architect @ Playtika
Mission: to introduce Apache Spark into the company in order to scale out the Data Platform.
- The project collected data from a large number of clients and had to incorporate it into one uniform data lake.
- Integrated Apache Spark into the company's Big Data life cycle.
- The challenge was to scale out the Data Platform with new, scalable technologies. I helped make the decision to adopt Spark for the data pipeline and then implemented it in code.
- Technologies: Spark, Kafka, Hadoop, Hive, Couchbase.
- Languages: Java, Scala
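The pipeline pattern behind the Spark/Kafka work above is micro-batch aggregation: events stream in, and each small batch updates a running state. A minimal plain-Python simulation of that pattern (a hypothetical toy, not Playtika's code; in the real pipeline the source is Kafka and the processing engine is Spark):

```python
from collections import defaultdict

def process_batch(state, batch):
    """Fold one micro-batch of (player_id, amount) events into the
    running per-player totals, as a stateful streaming job would."""
    for player, amount in batch:
        state[player] += amount
    return state

# Two micro-batches standing in for successive Kafka polls.
stream = [
    [("p1", 10), ("p2", 5)],  # batch 1
    [("p1", 3)],              # batch 2
]

totals = defaultdict(int)
for batch in stream:
    totals = process_batch(totals, batch)

print(dict(totals))  # {'p1': 13, 'p2': 5}
```

Spark Streaming applies the same idea at scale: state lives in partitioned, fault-tolerant storage instead of a local dict, and batches are distributed across executors.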
Java & Big Data Architect @ Fortscale
Mission: to assess the company's Big Data technologies and improve performance.
- Security events are collected into Big Data platforms.
- The task was to validate the technologies and help make decisions about what to use and how.
- Technologies: Cloudera distribution: Hadoop, Hive, Impala, Cloudera Manager.