Programming Languages: Java, Python, C#, JavaScript
Java Frameworks & Libraries: Spring (Integration, Security, Boot, Batch, AOP, JPA, JDBC Template, HATEOAS, GraphQL), JSP, JSF, Hibernate, EJB, jQuery
Cloud Platforms & Services: Google Cloud Platform (Cloud Endpoints, Cloud Functions, Google Kubernetes Engine, Cloud Run, Cloud Build, Pub/Sub, Cloud SQL (PostgreSQL), BigQuery, Cloud Storage), Firebase, OpenShift, Kibana, Elasticsearch, Splunk
CI/CD Tools: Jenkins, TeamCity, Git, Helm, Cloud Build
Build and Automation: Gradle, Maven, Docker
Web Services and APIs: REST, Swagger, WSDL, WADL, J2EE Web Services
Databases: BigQuery, Oracle, DB2, MongoDB, MySQL, PostgreSQL
Testing and Quality Assurance: JUnit, JMeter, Mockito
Communication Protocols & Middleware: Kafka, WebSphere Application Server, WebLogic, XML
Enterprise Tools & Platforms: SharePoint, ODM, BPM (basic)
Methodologies: Agile, Scrum, Extreme Programming, Feature-Driven Development, Rapid Application Development, Systems Development Life Cycle
Personable professional whose strengths include cultural sensitivity and an ability to build rapport with a diverse workforce in multicultural settings, advancing a positive company image through public presentations at universities and client sites.
Knowledge-hungry learner, eager to meet challenges; confident, hard-working employee committed to achieving excellence.
Highly motivated self-starter who takes initiative with minimal supervision.
Energetic performer consistently cited for unbridled passion for work, sunny disposition, and upbeat, positive attitude.
Productive worker with solid work ethic who exerts optimal effort in successfully completing tasks.
Highly adaptable, mobile, positive, resilient, patient risk-taker who is open to new ideas.
Resourceful team player who excels at building trusting relationships with customers and colleagues.
Exemplary planning and organizational skills, along with a high degree of detail orientation.
Goal-driven leader who maintains a productive climate and confidently motivates, mobilizes, and coaches employees to meet high performance standards.
Proven relationship-builder with unsurpassed interpersonal skills.
Highly analytical thinker with a demonstrated talent for identifying, scrutinizing, improving, and streamlining complex work processes.
Exceptional listener and communicator who effectively conveys information verbally and in writing.
Email: tiago.sllater@gmail.com
LinkedIn: LinkedIn Profile
YouTube: YouTube Channel
In the era of containers (the "Docker Age"), Java is still on top, but which is better: Spring Boot or Quarkus?
In the era of containers (the "Docker Age") Java still keeps alive, being struggling for it or not. Java has always been (in)famous regarding its performance, most of because of the abstraction layers between the code and the real machine, the cost of being multi-platform (Write once, run anywhere — remember this?), with a JVM in-between (JVM: software machine that simulates what a real machine does).
Nowadays, with microservice architecture, it may no longer make sense, nor offer any advantage, to build something multi-platform (interpreted) for something that will always run in the same place and on the same platform (a Docker container, i.e., a Linux environment). Portability is now less relevant than ever; that extra level of abstraction is not important. Having said that, let's perform a simple and raw comparison between two alternatives for building microservices in Java: the very well-known Spring Boot and the not-so-well-known (yet) Quarkus.
Who Is Quarkus?
An open-source set of technologies adapted to GraalVM and HotSpot for writing Java applications. It promises a super-fast startup time and a lower memory footprint, which makes it ideal for containers and serverless workloads. It uses Eclipse MicroProfile (JAX-RS, CDI, JSON-P), a subset of Java EE, to build microservices. GraalVM is a universal and polyglot virtual machine (JavaScript, Python, Ruby, R, Java, Scala, Kotlin). GraalVM (specifically Substrate VM) makes ahead-of-time (AOT) compilation possible, converting bytecode into native machine code and producing a binary that can be executed natively. Bear in mind that not every feature is available in native execution; AOT compilation has its limitations. Pay attention to this sentence (quoting the GraalVM team): "We run an aggressive static analysis that requires a closed-world assumption, which means that all classes and all bytecodes that are reachable at runtime must be known at build time." So, for instance, reflection and the Java Native Interface (JNI) won't work, at least not out of the box (they require some extra work). You can find a list of restrictions in the Native Image Java Limitations document.
Who Is Spring Boot?
Really? Well, just to say something (feel free to skip it), in one sentence: built on top of the Spring Framework, Spring Boot is an open-source framework that offers a much simpler way to build, configure, and run Java web-based applications. That makes it a good candidate for microservices.
Quarkus Image
Let's create the Quarkus application, to be wrapped later in a Docker image. Basically, we will do the same thing the Quarkus Getting Started tutorial does. Create the project with the Quarkus Maven archetype:

mvn io.quarkus:quarkus-maven-plugin:1.0.0.CR2:create \
    -DprojectGroupId=ujr.combat.quarkus \
    -DprojectArtifactId=quarkus-echo \
    -DclassName="ujr.combat.quarkus.EchoResource" \
    -Dpath="/echo"
In the generated code, we have to change just one thing: add a dependency, because we want to generate JSON content.
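The original dependency snippet did not survive extraction; as a sketch, for Quarkus 1.0 the JSON support for RESTEasy typically comes from the quarkus-resteasy-jsonb extension (the exact artifact depends on the Quarkus version):

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-jsonb</artifactId>
</dependency>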
Spring Boot Image
At this point, probably everyone knows how to produce an ordinary Spring Boot Docker image, so let's skip the details, right? Just one important observation: the code is exactly the same. Better said, almost the same, because we are using Spring Framework annotations, of course. That's the only difference. You can check every detail in the provided source code (link down below).

mvn install dockerfile:build
## Testing it...
docker run --name springboot-echo --rm -p 8082:8082 ujr/springboot-echo
Let's launch both containers, get them up and running a couple of times, and compare the startup time and the memory footprint. In this process, each container was created and destroyed 10 times; afterwards, their time to start and their memory footprints were analyzed. The numbers shown below are the averages across all those tests.
Startup Time
Obviously, this aspect can play an important role in scalability and serverless architecture. In a serverless model, an ephemeral container is normally triggered by an event to perform a task or function, and in cloud environments the price is usually based on the number of executions rather than on previously purchased compute capacity. So a cold start can hurt this type of solution, as the container (normally) stays alive only long enough to execute its task. For scalability, it is clear that if you need to scale out suddenly, the startup time defines how long it takes until your containers are completely ready (up and running) to handle the presented load. The more sudden the scenario, the worse long cold starts become. You may have noticed one more option in the Startup Time graph: it is exactly the same Quarkus application, but packaged as a JVM Docker image (using Dockerfile.jvm). As we can see, even the Quarkus application using a JVM Docker image has a faster startup time than Spring Boot. Needless to say, and obviously the winner, the native Quarkus application is by far the fastest of them all to start up.
Microservices is an architectural pattern in which an application is composed of loosely coupled, independently deployable services. Each service is responsible for a specific business function and communicates with other services over lightweight protocols.
In contrast, a Monolithic architecture is built as a single, tightly-coupled unit where functionalities are deployed together. A change in any module requires redeploying the entire application.
Inter-service communication can be handled through lightweight mechanisms such as synchronous REST calls over HTTP or asynchronous messaging (e.g., Kafka).
Spring Boot is an extension of the Spring Framework that simplifies dependency management and configuration, eliminating complex configuration files. It provides auto-configuration, embedded servers, and starter dependencies, reducing boilerplate code.
Auto-configuration automatically configures a Spring application based on the JAR dependencies in the classpath. For example, with spring-boot-starter-web, Spring Boot automatically configures a web application environment.
Starters are predefined dependency bundles that provide the necessary libraries to set up specific features or modules. For instance, spring-boot-starter-data-jpa includes the dependencies for using JPA with Spring.
Spring Boot supports externalized configuration through .properties files, .yml files, environment variables, or command-line arguments, making it easy to change settings for different environments.
The application.properties or application.yml file is used to configure application-specific properties such as database connections, the server port, and custom configurations.
Custom configurations can be defined with @Configuration and customized further by injecting property values with @Value or @ConfigurationProperties.
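A minimal sketch of a type-safe configuration class (the app prefix and timeout property are hypothetical):

@Configuration
@ConfigurationProperties(prefix = "app") // binds app.* entries from application.properties
public class AppProperties {
    private int timeoutSeconds; // bound from app.timeout-seconds
    public int getTimeoutSeconds() { return timeoutSeconds; }
    public void setTimeoutSeconds(int timeoutSeconds) { this.timeoutSeconds = timeoutSeconds; }
}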
Spring Boot Actuator provides operational features such as monitoring and health checks for production-ready applications. Some key endpoints include:
/actuator/health: Checks the application's health.
/actuator/metrics: Provides metrics about application performance.
/actuator/info: Displays application information.
With the spring-boot-starter-web dependency, RESTful controllers can be created using the @RestController annotation, routing requests with @RequestMapping or @GetMapping, and returning JSON objects automatically, as in the sketch below.
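A minimal sketch of such a controller (the /echo path and message body are illustrative):

@RestController
public class EchoController {
    // GET /echo returns a JSON object, serialized automatically by Spring Boot
    @GetMapping("/echo")
    public Map<String, String> echo() {
        return Map.of("message", "hello");
    }
}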
Spring Boot includes embedded servers such as Tomcat or Jetty, allowing applications to be packaged as JAR files and run without an external server. The embedded server can be configured with properties such as server.port, or by specifying the server type in pom.xml.
Spring Security, included via spring-boot-starter-security, provides default security configurations. Customizations can be made using @EnableWebSecurity and configuring custom settings for authentication and authorization.
Spring Boot provides version management for common dependencies through its parent POM or spring-boot-dependencies, ensuring compatible library versions and reducing dependency conflicts.
The @SpringBootApplication annotation combines:
@Configuration: Marks the class as a source of bean definitions.
@EnableAutoConfiguration: Enables auto-configuration.
@ComponentScan: Enables scanning for components in the package.
Profiles allow different configurations for different environments (e.g., development, testing, production) and can be activated with spring.profiles.active in the application properties or via command-line arguments.
Database settings are configured in application.properties using properties like spring.datasource.url, spring.datasource.username, and spring.datasource.password. Additional JPA settings can be added for Hibernate configuration, as sketched below.
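A minimal sketch of such a configuration (URL, credentials, and values are placeholders):

spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
spring.datasource.username=myuser
spring.datasource.password=secret
# Optional JPA/Hibernate settings
spring.jpa.hibernate.ddl-auto=validate
spring.jpa.show-sql=true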
Spring Boot provides default exception handling and allows for global exception management using the @ControllerAdvice and @ExceptionHandler annotations.
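A minimal sketch of a global exception handler (the exception type and response are illustrative):

@ControllerAdvice
public class GlobalExceptionHandler {
    // Handles any IllegalArgumentException thrown by a controller method
    @ExceptionHandler(IllegalArgumentException.class)
    public ResponseEntity<String> handleBadRequest(IllegalArgumentException ex) {
        return ResponseEntity.badRequest().body(ex.getMessage());
    }
}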
Spring Boot automatically runs SQL scripts like data.sql on startup. This file can be placed in src/main/resources to insert data at startup. To control DDL statements, provide a schema.sql file in the same location, for example:
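A small sketch (the users table is hypothetical):

-- src/main/resources/schema.sql
CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(100));

-- src/main/resources/data.sql
INSERT INTO users (id, name) VALUES (1, 'Alice');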
Spring Data JPA is part of the Spring Data project, offering integration with the Java Persistence API (JPA) to simplify common data access tasks like CRUD operations, pagination, and query execution.
Query methods can be derived from method names or written explicitly with the @Query annotation. JpaRepository extends CrudRepository, adding additional functionality like pagination, sorting, flushing the persistence context, and batch processing.
Use the @Query annotation to define custom JPQL queries:
@Query("SELECT e FROM Employee e WHERE e.name = :name")
List<Employee> findEmployeesByName(@Param("name") String name);
The @Transactional annotation ensures that operations execute within a single transaction, where changes are either fully committed or rolled back, maintaining data consistency.
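A minimal sketch (the Account type, its methods, and the repository are hypothetical):

@Service
public class TransferService {
    private final AccountRepository accountRepository;

    public TransferService(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    // Both saves commit together, or both roll back on a runtime exception
    @Transactional
    public void transfer(Account from, Account to, BigDecimal amount) {
        from.withdraw(amount);
        to.deposit(amount);
        accountRepository.save(from);
        accountRepository.save(to);
    }
}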
Pagination and sorting can be managed using the Pageable and Sort interfaces in JpaRepository, for example:
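A small sketch, assuming the Employee entity and repository from the earlier snippet:

// Fetch page 0 with 20 items, sorted by name ascending
Pageable pageable = PageRequest.of(0, 20, Sort.by("name").ascending());
Page<Employee> page = employeeRepository.findAll(pageable);
List<Employee> employees = page.getContent();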
Spring Security is a customizable framework for handling authentication, authorization, and integration with technologies like OAuth2, JWT, and LDAP.
Authentication is managed by delegating the process to an AuthenticationManager, which works with an AuthenticationProvider to retrieve user details from a UserDetailsService and verify credentials. Upon successful authentication, an Authentication object is created and stored in the SecurityContextHolder.
The UserDetailsService interface loads user-specific data during authentication. Its loadUserByUsername(String username) method returns a UserDetails object containing the user's credentials and authorities:
@Service
public class MyUserDetailsService implements UserDetailsService {
    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        // {noop} marks a plain-text password (no encoding); suitable for demos only
        return new User("john", "{noop}password", Collections.singletonList(new SimpleGrantedAuthority("ROLE_USER")));
    }
}
"READ_PRIVILEGE"
or "WRITE_PRIVILEGE"
."ROLE_ADMIN"
or "ROLE_USER"
.Spring Security uses SecurityContextHolder
to store the security context, which contains details of the authenticated user. It typically uses a session to persist the security context across multiple requests.
The PasswordEncoder interface is used to encode and verify passwords securely, hashing them before they are saved. Common implementations include BCryptPasswordEncoder, which applies a strong adaptive hash, and NoOpPasswordEncoder, which performs no encoding and is meant for testing only.
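A minimal sketch of registering a BCrypt encoder as a bean and using it:

@Bean
public PasswordEncoder passwordEncoder() {
    return new BCryptPasswordEncoder();
}

// Usage: hash on registration, verify on login
String hash = passwordEncoder.encode("rawPassword");
boolean matches = passwordEncoder.matches("rawPassword", hash);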
Spring Security uses a chain of filters to process authentication and authorization logic, with each filter executed in a specific order.
@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http.csrf().disable()
.authorizeRequests()
.antMatchers("/api/admin/**").hasRole("ADMIN")
.antMatchers("/api/user/**").hasRole("USER")
.and()
.httpBasic();
}
}
Spring Security protects against CSRF by generating a unique token for each session, which must be submitted with each state-changing request (such as POST, PUT, or DELETE).
@Component
public class CustomAuthenticationProvider implements AuthenticationProvider {
@Override
public Authentication authenticate(Authentication authentication) throws AuthenticationException {
String username = authentication.getName();
String password = authentication.getCredentials().toString();
if (validCredentials(username, password)) {
return new UsernamePasswordAuthenticationToken(username, password, new ArrayList<>());
} else {
throw new BadCredentialsException("Invalid Credentials");
}
}
@Override
public boolean supports(Class<?> authentication) {
return UsernamePasswordAuthenticationToken.class.isAssignableFrom(authentication);
}
}
http.authorizeRequests()
.antMatchers("/public/**").permitAll() // No security on public endpoints
.anyRequest().authenticated();
@Configuration
@EnableWebFluxSecurity
public class SecurityConfig {
@Bean
public SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) {
http
.cors() // Enable CORS
.and()
.csrf()
.csrfTokenRepository(CookieServerCsrfTokenRepository.withHttpOnlyFalse()) // Store CSRF token in a cookie
.and()
.authorizeExchange()
.pathMatchers("/api/public/**").permitAll() // Ignore CORS and CSRF for public APIs
.pathMatchers("/api/test").permitAll() // Allow public access to /api/test
.anyExchange().authenticated(); // Require authentication for other requests
return http.build();
}
@Bean
public CorsWebFilter corsWebFilter() {
CorsConfiguration configuration = new CorsConfiguration();
configuration.setAllowedOrigins(Arrays.asList("http://localhost:3000")); // Set your front-end app URL, ENV VARS are suggested
configuration.setAllowedMethods(Arrays.asList("GET", "POST", "PUT", "DELETE"));
configuration.setAllowedHeaders(Arrays.asList("Authorization", "Content-Type"));
configuration.setAllowCredentials(true);
configuration.setMaxAge(3600L); // Cache preflight response for an hour
UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
source.registerCorsConfiguration("/**", configuration);
return new CorsWebFilter(source);
}
}
@Configuration
@EnableWebSecurity
public class SecurityConfig {
@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
http
.csrf().disable() // Disable CSRF because we're stateless
.authorizeHttpRequests(auth -> auth
.requestMatchers("/api/public/**").permitAll() // Public endpoints, no JWT needed
.anyRequest().authenticated() // All other endpoints require authentication
)
.oauth2ResourceServer()
.jwt(); // Use JWT for OAuth2 Resource Server
return http.build();
}
}
Apache Kafka is a distributed streaming platform used to publish, subscribe to, store, and process streams of records in real-time.
Zookeeper: Manages and coordinates Kafka brokers.
Messages in Kafka are stored in topics and divided into partitions for scalability. Each partition is an ordered, immutable sequence of messages that is continually appended to. Kafka guarantees message durability and fault tolerance by replicating partitions across multiple brokers.
An offset is a unique identifier for each message within a partition.
Zookeeper is a distributed coordination service used to manage and coordinate brokers, topics, partitions, and consumer group offsets.
@Configuration
@EnableKafka
public class KafkaConfig {

// Producer configuration for writing POJO (User class)
@Bean
public ProducerFactory<String, User> producerFactory() {
Map<String, Object> config = new HashMap<>();
config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class); // For Key
config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class); // For Value (POJO)
return new DefaultKafkaProducerFactory<>(config);
}
@Bean
public KafkaTemplate<String, User> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
// Consumer configuration for reading POJO (User class)
@Bean
public ConsumerFactory<String, User> consumerFactory() {
Map<String, Object> config = new HashMap<>();
config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
config.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); // For Key
config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class); // For Value (POJO)
config.put(JsonDeserializer.TRUSTED_PACKAGES, "com.package"); // Trust the package containing your POJO
return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(), new JsonDeserializer<>(User.class));
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, User> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, User> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
return factory;
}
} // closes the KafkaConfig class
The KafkaProducer is responsible for sending messages to a Kafka topic, while the KafkaConsumer is responsible for reading those messages.
// KafkaProducer Example
private static final String TOPIC = "my_topic";
@Autowired
private KafkaTemplate<String, User> kafkaTemplate;
public void sendUser(User user) {
kafkaTemplate.send(TOPIC, user);
}
// KafkaConsumer Example
@KafkaListener(topics = "user_topic", groupId = "group_id", containerFactory = "kafkaListenerContainerFactory")
public void consumeUser(User user) {
    // Handle the received User message (already deserialized from JSON)
}
User Class Definition:
public class User implements Serializable {
private String id;
private String name;
private String email;
private Address address;
}
Spring Batch is a lightweight, comprehensive framework designed for building robust, large-scale batch processing applications.
In chunk-oriented processing, an ItemReader reads items, an ItemProcessor processes them, and an ItemWriter writes them in bulk. Example: read 1000 records from a file, process them, and write them to a database in chunks of 10, as in the sketch below.
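A minimal sketch of a chunk-oriented step, assuming Spring Batch 5's StepBuilder API; the reader, processor, and writer are hypothetical beans:

@Bean
public Step importStep(JobRepository jobRepository, PlatformTransactionManager txManager,
                       ItemReader<String> reader, ItemProcessor<String, String> processor,
                       ItemWriter<String> writer) {
    // Read items one by one, process them, and write them in chunks of 10;
    // each chunk is committed in its own transaction
    return new StepBuilder("importStep", jobRepository)
            .<String, String>chunk(10, txManager)
            .reader(reader)
            .processor(processor)
            .writer(writer)
            .build();
}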
Spring Batch offers various mechanisms for fault tolerance and retries during job execution.
Spring Batch has built-in support for restarting failed jobs. When a job fails, its execution status is stored in the JobRepository
. This allows the job to restart from the failure point without reprocessing the entire dataset.
Job execution in Spring Batch is associated with a Job Instance. If a job fails, it can be restarted with the same job parameters, creating a new Job Execution for the instance.
Job state is stored in the ExecutionContext, including the last processed item. Upon job restart, this context allows resuming from the failure point.

The Java Streams API provides a powerful tool for processing collections of data using functional programming principles. It allows operations like filtering, mapping, and reducing data without the need for manual loops or iteration.
// Filtering even numbers
List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
List<Integer> evenNumbers = numbers.stream().filter(n -> n % 2 == 0).collect(Collectors.toList());
// Reducing to a single value (sum)
List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
int sum = numbers.stream().reduce(0, (a, b) -> a + b);
// Finding the first element that starts with "b"
List<String> words = Arrays.asList("apple", "banana", "cherry", "date");
Optional<String> result = words.stream().filter(word -> word.startsWith("b")).findFirst();
// Processing elements in parallel
List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
numbers.parallelStream().forEach(System.out::println);
// Sorting in reverse order
List<String> names = Arrays.asList("John", "Alice", "Bob", "Charlie");
List<String> sortedNames = names.stream().sorted(Comparator.reverseOrder()).collect(Collectors.toList());
Throwable is the superclass of all errors and exceptions in Java.
Errors: serious problems that an application should not try to handle (e.g., OutOfMemoryError, StackOverflowError).
Exceptions: conditions an application might want to catch (e.g., IOException, NullPointerException).
Checked Exceptions: Must be caught or declared; they represent predictable problems like IOException, SQLException.
Unchecked Exceptions: Do not need to be caught or declared; they indicate runtime issues like NullPointerException, ArrayIndexOutOfBoundsException.
Object
└── Throwable
├── Error
│ └── OutOfMemoryError, StackOverflowError, etc.
└── Exception
├── IOException (Checked Exception)
└── RuntimeException (Unchecked Exception)
├── NullPointerException
└── ArrayIndexOutOfBoundsException
Abstract classes and interfaces are used to define templates for classes in Java, but they differ in purpose and implementation.
Abstract Class: Can include method implementations. Subclasses inherit both abstract and concrete methods.
abstract class Animal {
void eat() {
System.out.println("Eating...");
}
abstract void makeSound();
}
class Dog extends Animal {
@Override
void makeSound() {
System.out.println("Woof!");
}
}
Interface: Defines a contract that implementing classes must fulfill. Methods in an interface are implicitly abstract by default (since Java 8, interfaces can also contain default and static methods).
public interface Sender {
void send(File fileToBeSent);
}
public class ImageSender implements Sender {
@Override
public void send(File fileToBeSent) {
    // Image-specific sending logic goes here
}
}
The try-with-resources statement, introduced in Java 7, simplifies resource management by automatically closing resources like files, database connections, and sockets when they are no longer needed. It works with resources that implement the AutoCloseable interface.
try (ResourceType resource = new ResourceType()) {
// Use resource
} catch (ExceptionType e) {
// Handle exception
}
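A concrete sketch reading the first line of a file (the file path is illustrative):

// BufferedReader implements AutoCloseable, so it is closed automatically
try (BufferedReader reader = new BufferedReader(new FileReader("data.txt"))) {
    System.out.println(reader.readLine());
} catch (IOException e) {
    e.printStackTrace();
}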
The Java Collections Framework provides a set of classes and interfaces for managing and organizing groups of objects in various structures like Map and Set.
HashMap differs from LinkedHashMap since it doesn't maintain element order.
HashMap allows one null key and multiple null values; HashSet allows one null element.
Thread-safe alternatives include ConcurrentHashMap, the synchronized wrappers (such as Collections.synchronizedMap), Vector, Hashtable, CopyOnWriteArrayList and CopyOnWriteArraySet, and Stack.
The hashCode() and equals() methods are crucial for object comparison and storage in hash-based collections.
In a HashMap, when adding a key-value pair, hashCode() finds the bucket, and equals() checks for duplicate keys within that bucket. In a HashSet, when adding an element, hashCode() finds the bucket; if the bucket contains elements, equals() checks for duplicates, and if a duplicate is found, the element won't be added. A minimal example of a consistent implementation follows.
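A sketch of overriding equals() and hashCode() together (the Point class is illustrative):

public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point other = (Point) o;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y); // same fields as equals(), so equal objects share a hash
    }
}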
The equals()/hashCode() contract:
If two objects are equal according to equals(), their hashCode() values should also be the same.
Fields used in equals() and hashCode() should be immutable to maintain consistency.
hashCode() should consistently return the same value for the same object state.

REST is an architectural style that uses standard HTTP methods (GET, POST, PUT, DELETE) for CRUD operations and commonly exchanges data in JSON format.
Security: Typically relies on HTTPS, OAuth, and JWT for security.
SOAP is a protocol with a strict XML structure, often used for enterprise applications, and can use various transport protocols like HTTP, HTTPS, SMTP, or JMS.
Security: Uses WS-Security, supporting message-level security with encryption and authentication.
REST: Not a protocol, but an architectural style. Relies on standard HTTP methods (GET, POST, PUT, DELETE, etc.). Data formats: JSON (commonly), XML, HTML, plain text, etc.
SOAP: A protocol with strict standards defined by W3C. Operates over protocols like HTTP, SMTP, or others. Data format: Exclusively XML.
REST: Lightweight and simpler to implement. Easy to consume due to its reliance on URLs and standard HTTP.
SOAP: More complex due to strict standards. Requires a SOAP envelope and compliance with its extensive XML schema.
REST: Faster as it uses lightweight formats (e.g., JSON). Less bandwidth-intensive.
SOAP: Slower due to verbose XML messages and strict compliance. More bandwidth required.
REST: More flexible and can work with multiple formats like JSON, XML, etc. Supports multiple data types and communication styles.
SOAP: Limited to XML format for message exchange. Offers less flexibility in terms of data and transport options.
REST: Preferred for public APIs, mobile applications, microservices, and cloud-based services. Suitable for CRUD (Create, Read, Update, Delete) operations.
SOAP: Best suited for enterprise-level applications needing high security and transactional reliability. Common in banking, telecommunication, and financial services where ACID (Atomicity, Consistency, Isolation, Durability) properties are essential.
REST: Relies on HTTPS for basic security. Additional layers like OAuth are required for enhanced security.
SOAP: Built-in security features (WS-Security) for encryption, authentication, and secure transactions. Better suited for applications needing rigorous security.
REST: Stateless by design. Each request is independent, with no session storage on the server.
SOAP: Supports both stateful and stateless operations.
REST: Uses HTTP status codes (e.g., 404 Not Found, 500 Internal Server Error) for error handling.
SOAP: Provides detailed error reporting via its fault element in the XML response.
Notable features from recent Java versions include:
Local-variable type inference with var for more concise code (Java 10).
New String methods such as isBlank(), lines(), and strip() (Java 11).
Pattern matching for instanceof (Java 16).
The RandomGenerator interface (Java 17).

Docker is an open-source platform that automates application deployment inside lightweight, portable containers, ensuring consistency across environments. Containers isolate applications and their dependencies, providing a self-contained runtime environment.
Containers are ephemeral, meaning data is lost when a container is removed. To persist data, Docker offers two main options: volumes, which are managed by Docker, and bind mounts, which map a host directory into the container.
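For example (container, volume, and path names are illustrative):

# Named volume managed by Docker
docker run -v mydata:/var/lib/postgresql/data postgres

# Bind mount mapping a host directory into the container
docker run -v /host/config:/app/config myapp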
Apache Airflow is an open-source platform primarily used for workflow orchestration, particularly popular in data engineering and data science to manage data pipelines and automate tasks.
Airflow supports several executors for scaling and distributing tasks, such as the SequentialExecutor, LocalExecutor, CeleryExecutor, and KubernetesExecutor.
Swagger is an open-source framework designed for creating, documenting, and consuming RESTful web services. It provides a standard way to define APIs with tools for designing and documenting APIs interactively.
The OpenAPI Specification (OAS) is an industry standard for defining REST APIs, initially developed by Swagger but now managed independently by the OpenAPI Initiative. Swagger's tools (e.g., Swagger UI, Swagger Editor) use OAS as the format for defining API specifications in JSON or YAML.
@ApiResponse(responseCode = "200", description = "Successful retrieval")
@Tag(name = "User Management", description = "Operations related to user management")
OpenAPI 3.x added improvements over Swagger 2.0:
Request bodies are described with a dedicated requestBody component, whereas Swagger 2.0 included body data in parameters.
Schema composition supports allOf, oneOf, and anyOf for inheritance and polymorphism:
allOf: Combines multiple schemas.
oneOf: Requires exactly one of the listed schemas to be valid.
anyOf: Allows any of the listed schemas to be valid.

Maven is a centralized dependency management system based on the pom.xml configuration file. Its XML structure is rigid and verbose, which can make complex build configurations harder to manage.
Gradle uses Groovy or Kotlin as its domain-specific language (DSL) for configuration (build.gradle, or build.gradle.kts for Kotlin). It is more concise and flexible, allowing complex builds to be defined with less code, and it is generally faster than Maven because it supports incremental builds.
| Feature | Maven | Gradle |
|---|---|---|
| Configuration | XML (POM) | Groovy/Kotlin (DSL) |
| Approach | Convention over configuration | Convention + flexibility |
| Performance | Slower | Faster |
| Dependency Management | Rigid, centralized | Dynamic, flexible |
| Multi-Project Builds | Less flexible | Highly flexible |
| Build Script | Verbose XML | Concise DSL |
| Use Cases | Enterprise Java, standard builds | Android, modern JVM projects |
A Pod is the smallest deployable unit in Kubernetes and a single instance of a running process; it can host one or more containers.
A Kubernetes Service acts as a discovery proxy, exposing a set of Pods as a network service; common Service types include ClusterIP, NodePort, and LoadBalancer.
A Deployment defines a desired state for Pods and ReplicaSets, ensuring that a specified number of Pod replicas are running at any given time.
A ReplicaSet is responsible for maintaining a stable set of replica Pods running at any given time.
A DaemonSet ensures that a copy of a Pod runs on every node (or on selected nodes).
A StatefulSet is used to manage stateful applications. It is useful for databases and other applications requiring persistent storage, unique network identifiers, and ordered deployment or scaling. Each Pod in a StatefulSet has a unique, persistent identity and storage.
A Kubernetes Volume provides storage for data that the containers in a Pod can access and share.
A ConfigMap is used to store configuration data in key-value pairs.
Secrets are used to store sensitive information such as passwords, OAuth tokens, or SSH keys.
Namespaces provide a way to divide cluster resources between multiple users or teams, useful for creating virtual clusters to organize resources and workloads.
Ingress is an API object that manages external access to services within a cluster, typically HTTP/S.
Kubelet is an agent that runs on each node in Kubernetes, ensuring that containers described in PodSpecs are running and healthy. It communicates with the control plane (API server) to report node status, monitor Pods, and execute container workloads.
Kube-proxy is a network proxy that runs on each node and handles network communications both within the cluster and between external clients and services. It manages network rules on each node to enable communication between Pods or expose services outside the cluster.
The Controller Manager is a daemon responsible for running the various controllers that regulate the state of the cluster, such as the Node, ReplicaSet, and Endpoints controllers.
The Scheduler is responsible for assigning Pods to nodes based on resource availability, node constraints, and other factors, ensuring efficient and balanced workload distribution across nodes. It considers Pod requirements such as CPU, memory, node labels, and affinity rules.
The API Server is the central management component in Kubernetes, providing the main entry point to the control plane. It handles REST API requests for creating, updating, and deleting Kubernetes resources.
Etcd is a distributed, consistent key-value store used by Kubernetes to store all cluster state data. It ensures that the cluster state is saved and highly available, making it critical for the correct operation of the Kubernetes control plane.
HPA automatically scales the number of Pods in a deployment based on observed CPU/memory or other custom metrics, optimizing resource utilization by adjusting the number of replicas to match the workload.
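A minimal sketch of an HPA manifest targeting CPU utilization (names and thresholds are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70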
Pods are the smallest deployable units and hold containers. Services provide a stable network identity for accessing Pods. Deployments help manage the desired state of Pods, including rolling updates and scaling. Volumes and PersistentVolumes manage storage for Pods. ConfigMaps and Secrets provide configuration and sensitive data to applications. Kubelet, Kube-proxy, and other control plane components ensure that the desired cluster state is enforced.
The NetworkPolicy specifies which pods are allowed to communicate with other pods or external entities and can control ingress (incoming) and egress (outgoing) traffic. It operates at the network level, blocking or allowing traffic based on IP addresses, pod selectors, or namespace selectors.
Consider the following scenario: a frontend app in namespace-a needs to reach a backend app in namespace-b on TCP port 80, and traffic in both namespaces is restricted by NetworkPolicies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-frontend-to-backend
namespace: namespace-b
spec:
podSelector:
matchLabels:
app: backend
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
name: namespace-a
podSelector:
matchLabels:
app: frontend
ports:
- protocol: TCP
port: 80
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-frontend-egress
namespace: namespace-a
spec:
podSelector:
matchLabels:
app: frontend
policyTypes:
- Egress
egress:
- to:
- namespaceSelector:
matchLabels:
name: namespace-b
podSelector:
matchLabels:
app: backend
ports:
- protocol: TCP
port: 80
To access the backend service from the frontend service, use the following format:
<service-name>.<namespace-name>.svc.cluster.local
Example:
http://backend.namespace-b.svc.cluster.local:80
Continuous Integration (CI) is the practice of automating the integration of code changes from multiple developers into a single codebase several times a day.
The CI server runs automated unit tests, static code analysis, and sometimes even builds the application.
In Continuous Delivery, code is always ready to be deployed to production at any time, with the focus on creating a releasable artifact for every change; Continuous Deployment goes a step further and automatically releases every change that passes the pipeline.
Client-server databases are harder to move because there is more logic in stored procedures that needs to be verified and tested.
Three-tier or n-tier applications have most of the business logic in the application code.
So there is less testing required on the database. There also tend to be fewer dependencies, because the clients connect to the application server, which in turn connects to the database.
Service-oriented applications can be the easiest to move to the cloud because the database and all its details are hidden behind the service layer.
The service can be used to synchronize the source and target databases during the migration.
The assessment phase is where you determine the requirements and dependencies to migrate your apps to Google Cloud.
The assessment phase is crucial for the success of your migration.
You need to gain deep knowledge about the apps you want to migrate, their requirements, their dependencies, and your current environment.
First, building a comprehensive inventory of your apps.
Second, cataloging your apps according to their properties and dependencies.
Third, training and educating your teams on Google Cloud.
Fourth, building experiments and proofs of concept on Google Cloud.
Fifth, calculating the total cost of ownership of the target environment.
Sixth and finally, choosing the workloads that you want to migrate first.
A comprehensive list of the use cases that your app supports, including uncommon ones and corner cases.
All the requirements for each use case, such as performance and scalability requirements.
Expected consistency guarantees, fail over mechanisms, and network requirements.
A potential list of technologies and products that you want to investigate and test.
A POC will help you calculate the total cost of ownership of a Cloud solution.
When you have a clear view of the resources you need in the new environment, you can build a total cost of ownership model that lets you compare your costs on Google Cloud with the costs of your current environment.
Optimization is an ongoing process of continuous improvement.
You optimize your environment as it evolves. To avoid uncontrolled and duplicative efforts, you can set measurable optimization goals and stop when you meet those goals.
There are four approaches for migrating the database: scheduled maintenance, continuous replication, split reading and writing, and a data access microservice.
Use scheduled maintenance if you can tolerate some downtime.
Define a time window when the database and applications will be unavailable.
Migrate the data to the new database, then migrate client connections.
Lastly, turn everything back on.
Continuous replication uses your database's built-in replication tools to synchronize the old database to the new database.
This is relatively simple to set up and can be done by your database administrator.
There are also third-party tools, like Striim, that will automate this process.
Eventually, you will move the client connections from the old database to the new one.
Then you can turn off the replication and retire the old site.
With split reading and writing, the clients read and write to both the old and new databases for some amount of time.
Eventually, you can retire the old database.
Obviously, this requires code changes on the client.
You would only do this when you are migrating to different types of databases, for example, if you're migrating from Oracle to Spanner.
If you're migrating from Oracle on-premises to Oracle on Google Cloud Bare Metal Solution, continuous replication would make more sense.
All data access is encapsulated or hidden behind the service.
First, migrate all client connections to the service.
The service then handles migration from the old to the new database.
Essentially, this makes split reading and writing seamless to the clients.
Lift and shift means you're moving an application or database as-is into your new cloud environment. This is often an easy and effective way of migrating databases and other applications as well.
Monolithic applications like a WordPress site, for example, might be good candidates for a lift and shift approach.
Create an image of the Virtual Machine in the current environment. Then export the image from the current environment and copy it to a Google Cloud Storage bucket.
Create a Compute Engine image from the exported image. Once you have a Compute Engine image, use it to create your virtual machine.
Machines are moved very quickly, and then their data is streamed into Google Cloud before it becomes live.
Identify a set of VMs to migrate first, prepare the VMs, migrate them, and then test to verify that they were migrated correctly.
An assessment tool helps identify dependencies and can recommend target services and databases based on its analysis.
Database Migration Service is an online database migration tool.
Perform a SQL backup on the source database, then copy the backup files into Google Cloud.
Then run a restore on the new target database server.
Switch databases with no downtime: using database replication can minimize downtime.
First, configure the existing database as main.
Second, create the new database and configure it as the replica.
Third, the main synchronizes the data with the replica.
Fourth, migrate the clients to the replica and promote it to the main.
------------------------------------
Migrate large numbers of clients with no downtime by using a data access service.
First, create a service that encapsulates all data access.
Second, migrate clients to use the service, rather than connecting to the database.
Third, once all clients are updated, the service is the only direct database client.
Fourth, replicate the database and then migrate the service connection.
Use blue/green deployments to migrate data access services from on-premises to the cloud; they reduce the risk of a migration by allowing a quick revert to the older service.