Technology
em360tech (site: em360tech.com “architecture”)
TOGAF® — The Open Group Architecture Framework — is one of the enterprise architecture frameworks for improving business efficiency. The framework helps companies define their goals and align them with architecture objectives around enterprise software development.
TOGAF® has been used by enterprise architects (EAs) as a common language for mapping out IT development strategies for more than 25 years. It was developed in 1995 to help businesses and enterprise architects align on cross-departmental projects in a structured way that facilitates key business objectives.
In particular, according to The Open Group Architecture Forum, the fundamental purpose of TOGAF® is to support critical business needs through:
And to ensure the above is implemented in a systematic, repeatable way, a customizable process called the TOGAF® Architecture Development Method (ADM) can be followed across multiple stages to manage the requirements of any large-scale IT modernization effort.
In 2022, The Open Group updated the framework and released the TOGAF Standard, 10th Edition, which supersedes the previous 9.2 edition. The update makes it easier for organizations to adopt and implement TOGAF best practices. The Open Group states that the 10th Edition provides improved efficiency and simpler navigation for implementing the framework, making it more accessible and user-friendly.
The TOGAF® ADM process is specifically designed to accelerate workflow across four enterprise architecture domains:
The TOGAF framework not only describes the key components of an enterprise architecture through its four architecture domains, but also provides a clear roadmap for how to create that architecture. This roadmap is known as the Architecture Development Method (ADM), a sequential nine-phase process. The ADM guides organizations through developing their enterprise architecture and is depicted in Figure 1.
Across the nine stages of the TOGAF® ADM process, these four architecture domains are iteratively developed to create a balanced architecture capable of supporting organizational change. An industry-agnostic process, this method aims to limit guesswork and promote maturity in enterprise architecture programs, all while accumulating company-specific architecture repositories to support future projects.
TOGAF® is currently on version 10, and with its evolving library of definitions and symbology comes the inevitable struggle to align with the framework in an agile way. Much of this stems from TOGAF®'s comprehensive architecture compliance review process, a checklist involving hundreds of items across categories such as information and systems management; hardware and operating systems; software services and middleware; applications (business, infrastructure, and integration specifications); and information management.
However, while compliance is an indispensable element of architecture governance, religiously adhering to the framework's standards is a tall order for any enterprise architecture program. As such, for modern organizations looking to maintain TOGAF® best practices efficiently, securing stakeholder buy-in across the organization to assess and catalog IT projects is all but necessary.
Agile and TOGAF® are indeed capable of coexisting, but for that to happen, collaborative paths to standardize IT entities across teams must be established.
The TOGAF Standard, 10th Edition, introduces a new modular format that brings several benefits. The Fundamental Content branch includes all the framework's essentials, making it easier for companies to start learning and implementing TOGAF. This part of the framework is the least likely to change significantly over time. The modular structure gives enterprise architects better guidance and a simpler navigation process, allowing them to use the framework effectively. The knowledge base and topic-specific guidance are separated into formal parts of the framework, making frequent releases of additional material possible while preserving stability in the fundamental content.
In addition, the TOGAF Standard, 10th Edition, continues to improve its ability to support the fundamentals of business strategy, making it easier for vendors to deliver new capabilities and services to specialized market segments while adhering to open standards. The framework is designed to adapt to changing business needs and is a living body of knowledge, allowing organizations to establish agile ways of working that continuously evolve. The 10th Edition demonstrates how the framework is evolving to deliver value in times of change.
Axiom – A statement or proposition which is regarded as being established, accepted, or self-evidently true.
Software architects (like mathematicians) also build theories atop axioms (but the software world is softer than mathematics).
Architects have an important responsibility to question assumptions and axioms left over from previous eras. Each new era requires new practices, tools, measurements, patterns, and a host of other changes.
The industry does not have a good definition of software architecture.
Architecture is about the important stuff… whatever that is ~ Ralph Johnson
The responsibilities of a software architect encompass technical abilities, soft skills, operational awareness, and a host of others.
When studying architecture, keep in mind that everything can be understood in context: why certain decisions were made was based on the realities of the environment (for example, building a microservice architecture in 2002 would have been inconceivably expensive).
Knowledge of the architecture structure, architecture characteristics, architecture decisions, and design principles is needed to fully understand the architecture of the system.
Expectations of an architect:
All architectures become iterative because of unknown unknowns. Agile just recognizes this and does it sooner.
An iterative process fits the nature of software architecture. Teams trying to build a modern system such as microservices using Waterfall will encounter a great deal of friction.
Nothing remains static. What we need is evolutionary architecture: mutate the solution, evolve new solutions iteratively. Adopting Agile engineering practices (continuous integration, automated machine provisioning, …) makes building resilient architectures easier.
Agile methodologies support change better than planning-heavy processes because of their tight feedback loops.
Laws of Software Architecture:
4 main aspects of thinking like an architect:
Frozen Caveman Anti-Pattern: describes an architect who always reverts to their pet irrational concern for every architecture. This anti-pattern manifests in architects who have been burned in the past by a poor decision/unexpected occurrence, making them particularly cautious in the future.
How can an architect remain hands-on and keep coding skills sharp?
Modularity is an organizing principle. If an architect designs a system without paying attention to how the pieces wire together, they end up creating a system that presents myriad difficulties.
Developers typically use modules as a way to group related code together. For discussions about architecture, we use modularity as a general term to denote a related grouping of code: classes, functions, or any other grouping.
Cohesion – the extent to which the parts of a module belong together within the same module. It is a measure of how related the parts are to one another.
Abstractness is the ratio of abstract artifacts to concrete artifacts. It represents a measure of abstractness versus implementation. A code base with no abstractions vs a code base with too many abstractions.
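As a rough sketch of the abstractness metric (counting rules vary by tool, and the example classes are purely illustrative), it can be computed as the ratio of abstract artifacts to all artifacts:

```python
import inspect
from abc import ABC, abstractmethod

def abstractness(classes):
    """Abstractness = abstract artifacts / total artifacts.
    0.0 means no abstractions at all; 1.0 means nothing concrete."""
    abstract = sum(1 for c in classes if inspect.isabstract(c))
    return abstract / len(classes)

# Hypothetical mini code base: one interface, two implementations.
class PaymentGateway(ABC):
    @abstractmethod
    def charge(self, amount): ...

class StripeGateway(PaymentGateway):
    def charge(self, amount): return f"charged {amount}"

class MockGateway(PaymentGateway):
    def charge(self, amount): return "ok"

# One abstract artifact out of three total.
print(abstractness([PaymentGateway, StripeGateway, MockGateway]))
```

A value near 0 suggests a code base with no abstractions; a value near 1 suggests too many.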
Architects may collaborate on defining the domain or business requirements, but one key responsibility entails defining, discovering, and analyzing all the things the software must do that isn’t directly related to the domain functionality — architectural characteristics.
Operational Architecture Characteristics:
Structural Architecture Characteristics
Cross-cutting Architecture Characteristics
Any list of architecture characteristics will be incomplete. Any software project may require novel architecture characteristics based on unique factors. Many of the terms are imprecise and ambiguous. No complete list of standards exists.
Applications can support only a few of the architecture characteristics we have listed. Firstly, each of the supported characteristics requires design effort. Secondly, each architecture characteristic often has an impact on others. Architects rarely encounter a situation where they are able to design a system and maximize every single architecture characteristic.
Never shoot for the best architecture, but rather the least worst architecture.
Too many architecture characteristics lead to generic solutions that are trying to solve every business problem, and those architectures rarely work because the design becomes unwieldy. Architecture design should be as iterative as possible.
Identifying the correct architectural characteristics for a given problem requires an architect to not only understand the domain problem, but also collaborate with the problem domain stakeholders to determine what is truly important from a domain perspective.
Extracting architecture characteristics from domain concerns: translate domain concerns to identify the right architecture characteristics. Do not design a generic architecture; focus on a short list of characteristics. Too many characteristics lead to greater and greater complexity. Keep the design simple. Instead of prioritizing characteristics, have the domain stakeholders select the top 3 most important characteristics from the final list.
Translation of domain concerns to architecture characteristics:
Extracting architecture characteristics from requirements: some characteristics come from explicit statements in requirements.
Architecture Katas – in order to become a great architect you need practice. The Kata exercise provides architects with a problem stated in domain terms (description, users, requirements) and additional context. Small teams work 45 minutes on a design, then show results to the other groups, who vote on who came up with the best architecture. Team members ideally get feedback from an experienced architect about missed trade-offs and alternative designs.
Explicit characteristics – appear in a requirements specification, e.g. support for a particular number of users.
Implicit characteristics – characteristics aren’t specified in requirements documents, yet they make up an important aspect of the design, e.g. availability – making sure users can access the website, security – no one wants to create insecure software, …
Architects must remember: there is no best design in architecture, only a least worst collection of trade-offs.
Operational measures: obvious direct measurements, like performance — measure response time. High-level teams don’t just establish hard performance numbers, they base their definitions on statistical analysis.
Structural measures: addressing critical aspects of code structure, like cyclomatic complexity – the measurement for code complexity, computed by applying graph theory to code.
Overly complex code represents a code smell. It hurts almost every one of the desirable characteristics of code bases (modularity, testability, deployability, …). Yet if teams don't keep an eye on gradually growing complexity, that complexity will come to dominate the code base.
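As a minimal sketch of the idea (not a full implementation of the graph-theoretic formula), cyclomatic complexity for a single function can be approximated as 1 plus the number of decision points:

```python
import ast

# Rough sketch: cyclomatic complexity of a Python snippet, counted
# as 1 + number of decision points (if/for/while/boolean ops/except).
DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

snippet = """
def classify(n):
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(snippet))  # two `if` nodes -> complexity of 3
```

Real tools (e.g. those implementing McCabe's metric) use the control-flow graph directly; this sketch only counts branching AST nodes.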
Process measures: some characteristics intersect with software development processes. For example, agility can relate to the software development process, ease of deployment and testability requires some emphasis on good modularity and isolation at the architecture level.
Governing architecture characteristics – for example, ensuring software quality within an organization falls under the heading of architectural governance, because it falls within the scope of architecture, and negligence can lead to disastrous quality problems.
Architecture fitness function – any mechanism that provides an objective integrity assessment of some architecture characteristic or combination of architecture characteristics. Many tools may be used to implement fitness functions: metrics, monitors, unit tests, chaos engineering, …
Rather than a heavyweight governance mechanism, fitness functions provide a mechanism for architects to express important architectural principles and automatically verify them. Developers know that they shouldn't release insecure code, but that priority competes with dozens or hundreds of other priorities for busy developers. Tools like Security Monkey, and fitness functions generally, allow architects to codify important governance checks into the substrate of the architecture.
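A minimal structural fitness function might look like the following sketch, in the spirit of tools such as ArchUnit. The layer names and source strings are hypothetical; a real check would scan actual files:

```python
import ast

# Hypothetical rule: modules in the `domain` layer must never import
# from the `presentation` layer. Source strings stand in for real files.
modules = {
    "domain/order.py": "import billing\n",
    "domain/billing.py": "from presentation import views\n",  # violation
}

def layer_violations(modules, forbidden_prefix="presentation"):
    """Return the paths of modules that import from the forbidden layer."""
    bad = []
    for path, source in modules.items():
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                names = [node.module or ""]
            else:
                continue
            if any(name.startswith(forbidden_prefix) for name in names):
                bad.append(path)
    return bad

print(layer_violations(modules))  # the offending module is reported
```

Run as part of the build, such a check turns an architectural principle ("domain code never depends on presentation code") into an automatically verified constraint.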
When evaluating many operational architecture characteristics, an architect must consider dependent components outside the code base that will impact those characteristics.
Connascence – two components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system.
If two services in a microservices architecture share the same class definition, they are statically connascent. Dynamic connascence: with synchronous calls, the caller needs to wait for the response from the callee; asynchronous calls allow fire-and-forget semantics in an event-driven architecture.
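A small illustration of static connascence, with purely hypothetical service names: two "services" depend on the same data-transfer class, so any change to its shape forces both to change together.

```python
from dataclasses import dataclass

# Shared contract: both services depend on this exact shape.
# Renaming or removing a field forces BOTH services to change
# together - that shared dependency is static connascence.
@dataclass
class OrderDTO:
    order_id: int
    amount: float

def billing_service(order: OrderDTO) -> str:
    return f"billing order {order.order_id} for {order.amount}"

def shipping_service(order: OrderDTO) -> str:
    return f"shipping order {order.order_id}"

order = OrderDTO(order_id=1, amount=9.99)
print(billing_service(order))   # → billing order 1 for 9.99
print(shipping_service(order))  # → shipping order 1
```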
Component level coupling isn’t the only thing that binds software together. Many business concepts semantically bind parts of the system together, creating functional cohesion.
Architecture quantum – an independently deployable artifact with high functional cohesion and synchronous connascence.
Architects typically think in terms of components, the physical manifestation of a module. Typically, the architect defines, refines, manages, and governs components within an architecture.
Architecture Partitioning – several styles exist, with different sets of trade-offs (layered architecture, modular monolith).
Conway’s Law: Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.
This law suggests that when a group of people designs some technical artifact, the communication structures between the people end up replicated in the design.
Technical partitioning – organizing components by technical capabilities (presentation, business rules, persistence).
Domain partitioning – modeling by identifying domains/workflows that are independent and decoupled from one another. Microservices are based on this philosophy.
Developers should never take components designed by architects as the last word. All software design benefits from iteration. The initial design should be viewed as a first draft.
Component identification flow:
Finding the proper granularity for components is one of the most difficult tasks. A too fine-grained design leads to too much communication between components; a too coarse-grained design encourages high internal coupling.
Discovering components:
Monolithic vs Distributed Architecture:
Architecture styles (a.k.a. architecture patterns) – describe a named relationship of components covering a variety of architecture characteristics. Style name, similar to design patterns, creates a single name that acts as shorthand between experienced architects.
Big Ball of Mud – the absence of any discernible architecture structure. The lack of structure makes change increasingly difficult, and testing, deployment, scalability, and performance all become problematic. The mess results from a lack of governance around code quality and structure.
Client/Server – separation of responsibilities – backend-frontend/two-tier/client-server.
Architecture styles can be classified into 2 main types:
The Fallacies of Distributed Computing:
Other distributed considerations:
While a monolithic system can rely on database transactions (commit/rollback) for consistency, it is much more difficult to do the same in a distributed system. Distributed systems rely on eventual consistency – this is one of the trade-offs. Transactional sagas are one way to manage distributed transactions.
The Layered Architecture (n-tiered) – the standard for most applications, because of simplicity, familiarity, and low cost. The style also falls into several architectural anti-patterns (architecture by implication, accidental architecture).
Most layered architectures consist of 4 standard layers: presentation, business, persistence, and database.
The layered architecture is a technically partitioned architecture (as opposed to domain-partitioned architecture). Groups of components, rather than being grouped by domain, are grouped by their technical role in the architecture. As a result, any particular business domain is spread throughout all of the layers of the architecture. A domain-driven design does not work well with the layered architecture style.
Each layer can be either closed or open.
The layers of isolation – changes made in one layer of the architecture generally don't affect components in other layers. Each layer is independent of the other layers, having little or no knowledge of the inner workings of other layers in the architecture. Violating this concept produces a very tightly coupled application with layer interdependencies between components. That type of architecture becomes brittle, as well as difficult and expensive to change.
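A minimal sketch of closed layers, with illustrative class and method names: each layer talks only to the layer directly beneath it, so the presentation layer never references persistence directly.

```python
# Sketch of closed layers: each layer depends only on the layer
# directly beneath it. The customer example is purely illustrative.

class PersistenceLayer:
    def find_customer(self, customer_id):
        return {"id": customer_id, "name": "Ada"}  # stand-in for a DB query

class BusinessLayer:
    def __init__(self, persistence):
        self._persistence = persistence
    def get_customer(self, customer_id):
        customer = self._persistence.find_customer(customer_id)
        customer["display"] = customer["name"].upper()  # a business rule
        return customer

class PresentationLayer:
    def __init__(self, business):
        # No reference to persistence here: the layers stay isolated,
        # so swapping the persistence implementation never touches this code.
        self._business = business
    def render_customer(self, customer_id):
        return f"Customer: {self._business.get_customer(customer_id)['display']}"

app = PresentationLayer(BusinessLayer(PersistenceLayer()))
print(app.render_customer(42))  # → Customer: ADA
```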
This architecture makes a good starting point for most applications when it is not yet known exactly which architecture will ultimately be used. Be sure to keep reuse at a minimum and keep object hierarchies shallow. A good level of modularity will help facilitate the move to another architecture style later on.
Watch out for the architecture sinkhole anti-pattern – this anti-pattern occurs when requests move from one layer to another as simple pass-through processing with no business logic performed within each layer. For example, the presentation layer responds to a simple request from the user to retrieve basic customer data.
Pipeline (a.k.a. pipes, filters) architecture: Filter -(Pipe)-> Filter -(Pipe)-> Filter -(Pipe)-> Filter
ETL tools leverage the pipeline architecture for the flow and modification of data from one database to another.
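The pipes-and-filters shape above can be sketched as plain function composition; the filter names below are hypothetical examples, not from any particular ETL tool:

```python
from functools import reduce

# Pipes-and-filters sketch: each filter is a pure function over the
# payload; the "pipe" is simple function composition.
def drop_short(words):  return [w for w in words if len(w) > 2]
def capitalize(words):  return [w.capitalize() for w in words]
def join(words):        return " ".join(words)

def pipeline(*filters):
    """Compose filters left-to-right into a single callable."""
    return lambda data: reduce(lambda acc, f: f(acc), filters, data)

etl = pipeline(drop_short, capitalize, join)
print(etl(["an", "event", "driven", "data", "flow"]))  # → Event Driven Data Flow
```

Because each filter is self-contained and stateless, filters can be reordered, replaced, or reused in other pipelines without touching the rest of the flow.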
The microkernel architecture style (a.k.a plug-in) – a relatively simple monolithic architecture consisting of two components: a core system and plug-in components.
Core system – the minimal functionality required to run the system. Depending on the size and complexity, the core system can be implemented as a layered architecture or modular monolith.
Plug-in components – standalone, independent components that contain specialized processing, additional features, and custom code meant to enhance or extend the core system. Additionally, they can be used to isolate highly volatile code, creating better maintainability and testability within the application. Plug-in components should have no-dependencies between them.
Plug-in components do not always have to use point-to-point communication with the core system (REST or messaging can be used instead). Each plug-in can be a standalone service (or even a microservice) – this topology is still only a single architecture quantum due to the monolithic core system.
Plug-in Registry – the core system needs to know which plug-in modules are available and how to get them. The registry contains information about each plug-in (name, data contract, remote access protocol). The registry can be as simple as an internal map structure owned by the core system, or as complex as a registry and discovery tool (like ZooKeeper or Consul).
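In the simplest case, the registry really is just an internal map. A sketch, with a hypothetical device-assessment domain and made-up plug-in names:

```python
# Minimal microkernel sketch: the core system owns a plug-in registry
# (here a plain dict) and knows nothing about plug-in internals.

registry = {}

def plugin(name):
    """Decorator that registers a handler function under a plug-in name."""
    def register(fn):
        registry[name] = fn
        return fn
    return register

@plugin("iphone")
def assess_iphone(device):
    return {"device": device, "issues": ["cracked screen"]}

@plugin("android")
def assess_android(device):
    return {"device": device, "issues": []}

def core_assess(device_type, device):
    # The core system only looks up the contract; adding a new device
    # type means registering a new plug-in, not changing the core.
    handler = registry.get(device_type)
    if handler is None:
        raise ValueError(f"no plug-in registered for {device_type}")
    return handler(device)

print(core_assess("iphone", "iPhone 12"))
```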
Examples of usages: Eclipse IDE, JIRA, Jenkins, Internet web browsers, …
Problems that require different configurations for each location or client match extremely well with this architecture style. Another example is a product that places a strong emphasis on user customization and feature extensibility.
Service-based architecture – a hybrid of microservices and one of the most pragmatic architecture styles (flexible, simpler and cheaper than microservices or event-driven services).
Topology: a distributed macro layered structure consisting of a separately deployed user interface, separately deployed coarse-grained services (domain services), and a monolithic database. Because the services typically share a single monolithic database, the number of services within an application context ranges between 4 and 12.
Based on scalability, fault tolerance, and throughput needs, multiple instances of a domain service can exist. Multiple instances require some form of load balancing.
Many variants exist within the service-based architecture:
Similarly, you can break apart a single monolithic database, going as far as domain-scoped databases.
Service-based architecture uses a centrally shared database. Because of the small number of services, database connections are not usually an issue. Database changes, however, can be an issue: if not done properly, a table schema change can impact every service, making database changes a very costly task in terms of effort and coordination.
One way to mitigate the impact and risk of database changes is to logically partition the database and manifest the logical partitioning through federated shared libraries. Changes to a table within a particular logical domain then impact only those services using that shared library.
When making changes to shared tables, lock the common entity objects and restrict change access to only the database team. This helps control change and emphasizes the significance of changes to the common tables used by all services.
Service based architecture – one of the most pragmatic architecture styles, natural fit when doing DDD, preserves ACID better than any other distributed architecture, good level of architectural modularity.
Event-driven architecture – a popular distributed asynchronous architecture style used to produce highly scalable and high-performance apps. It can be used for small applications as well as large, complex ones. It is made up of decoupled event processing components that asynchronously receive and process events. It can be used as a standalone style or embedded within other architecture styles (e.g. event-driven microservices architecture).
2 primary topologies:
ERROR HANDLING: the workflow event pattern – leverages delegation, containment, and repair through the use of a workflow delegate. On error, the event consumer immediately delegates the error to the workflow processor and moves on. The workflow processor tries to figure out what is wrong with the message (rules, machine learning, …); once the message is repaired, it can be sent back to the event processor. In case of a very problematic error, a human agent can determine what is wrong with the message and then re-submit it.
Data loss (lost messages) – a primary concern when dealing with asynchronous communication. Typical data-loss scenarios:
Broadcast – the capability to broadcast events without knowledge of who is receiving the message and what they do with it. Broadcasting is perhaps the highest level of decoupling between event processors.
In event-driven architecture, synchronous communication is accomplished through request-reply messaging. Each event channel within request-reply messaging has 2 queues (request + reply queue). 2 primary techniques for implementing request-reply messaging:
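One widely used technique for matching a reply to its request is a correlation ID: the reply carries the ID of the message it answers. A sketch using in-process queues as stand-ins for real message channels (field names are illustrative):

```python
import queue
import threading
import uuid

# Request-reply sketch: one request queue, one reply queue. The reply's
# correlation_id echoes the request's message_id so the caller can match it.
request_q, reply_q = queue.Queue(), queue.Queue()

def responder():
    while True:
        msg = request_q.get()
        if msg is None:          # sentinel: shut down
            break
        reply_q.put({"correlation_id": msg["message_id"],
                     "body": msg["body"].upper()})

threading.Thread(target=responder, daemon=True).start()

message_id = str(uuid.uuid4())
request_q.put({"message_id": message_id, "body": "place order"})

reply = reply_q.get(timeout=2)
assert reply["correlation_id"] == message_id  # this reply answers our request
print(reply["body"])  # → PLACE ORDER
request_q.put(None)   # stop the responder
```

The alternative technique is a temporary (exclusive) reply queue per requestor, which avoids filtering replies at the cost of creating more queues.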
In any high-volume application with a large concurrent load, the database will become a bottleneck, regardless of the caching technologies used.
The space-based architecture style is specifically designed to address problems involving high scalability, elasticity, and high concurrency issues.
Tuple space – the technique of using multiple parallel processors communicating through shared memory.
High scalability, elasticity, and performance are achieved by removing the central database and leveraging replicated in-memory data grids. Application data is kept in memory and replicated among all active processing units.
Several architecture components that make up a space-based architecture:
Data collision – occurs when the same data is updated in cache instance A and, before replication completes, also updated in cache instance B. The local update in B is then overwritten by the older replicated data from A, and A's copy is likewise overwritten by B's. Data collision rate factors: replication latency, number of instances, cache size.
Distributed cache – better data consistency. Replicated cache – better performance and fault tolerance.
Example usages of space-based architecture: well suited for applications that experience high spikes in user or request volume and apps that have throughput in excess of 10,000 concurrent users – online concert ticketing systems, online auction systems.
This style (orchestration-driven service-oriented architecture) appeared in the late 1990s, when companies were becoming enterprises and architects were forced to reuse as much as possible because of expensive software licenses (no open source alternatives existed).
Reuse – the dominant philosophy in this architecture.
This architecture in practice was mostly a disaster.
When a team builds a system primarily around reuse, they also incur a huge amount of coupling between components. Each change has a potentially huge ripple effect. That in turn leads to the need for coordinated deployments, holistic testing, and other drags on engineering efficiency.
This architecture manages to find the disadvantages of both monolithic and distributed architectures!
There is no secret group of architects who decide what the next big movement will be. Rather, it turns out that many architects end up making common decisions.
Microservices differ in this regard – the style was popularized by a famous blog entry by Martin Fowler and James Lewis.
Microservices architecture is heavily inspired by the ideas in DDD; the concept of a bounded context decidedly inspired microservices. Within a bounded context, the internal parts (code, data schemas) are coupled together to produce work, but they are never coupled to anything outside the bounded context.
Each service is expected to include all necessary parts to operate independently.
Performance is often a negative side effect of the distributed nature of microservices. Network calls take much longer than method calls. It is advised to avoid transactions across service boundaries, which makes finding the right granularity the key to success in this architecture.
It is hard to define the right granularity for services in microservices. If there are too many services, a lot of communication will be required to perform work. The purpose of service boundaries is to capture a domain or workflow.
Guidelines to find the appropriate boundaries:
Microservices Architecture tries to avoid all kinds of coupling – including shared schemas and databases used as integration points.
Once a team has built several microservices, they realize that each service has common operational elements that benefit from a shared implementation. The shared sidecar can be owned either by individual teams or by a shared infrastructure team. Once each service includes a common sidecar, the sidecars form a service mesh, allowing unified control across infrastructure concerns like logging and monitoring.
2 styles of user interfaces:
Microservices architectures typically utilize protocol-aware heterogeneous interoperability:
For asynchronous communication, architects often use events and messages (internally utilizing an event-driven architecture).
The broker and mediator patterns manifest as choreography and orchestration:
Building transactions across service boundaries violates the core decoupling principle of the microservices architecture. DON’T.
Don’t do transactions in microservices – fix granularity instead.
Exceptions always exist (e.g. 2 different services need vastly different architecture characteristics -> different boundaries), in such situations – patterns exist to handle transaction orchestration (with serious trade-offs).
SAGA – the mediator calls each part of the transaction, records success/failure, and coordinates the results. In case of an error, the mediator must ensure that no part of the transaction succeeds if one part fails (e.g. by sending requests to undo the parts that already completed, or by keeping the transaction in a pending state until all parts succeed).
A few transactions across services is sometimes necessary; if it is the dominant feature of the architecture, mistakes were made!
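A hedged sketch of a mediator-driven saga, with made-up step names: each step pairs an action with a compensating action, and on failure the mediator undoes the steps that already succeeded, in reverse order.

```python
# Saga sketch: steps are (action, compensating_action) pairs. If any
# action fails, the mediator runs the compensations for the completed
# steps in reverse order, so no partial transaction survives.
def run_saga(steps, state):
    done = []
    for action, compensate in steps:
        try:
            action(state)
            done.append(compensate)
        except Exception:
            for undo in reversed(done):
                undo(state)
            return "rolled back"
    return "committed"

def reserve(state):   state["reserved"] = True
def unreserve(state): state["reserved"] = False
def charge(state):    raise RuntimeError("payment declined")
def refund(state):    pass

state = {}
print(run_saga([(reserve, unreserve), (charge, refund)], state))  # → rolled back
print(state["reserved"])  # → False (the reservation was compensated)
```

Note that compensation is not a true rollback: other consumers may have observed the intermediate state, which is why sagas carry serious trade-offs.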
Performance is often an issue in microservices – many network calls, which carry high performance overhead. Many patterns exist to increase performance (data caching and replication).
However, some of the most scalable systems yet built have utilized this style to great success, thanks to its scalability, elasticity, and evolvability.
Additional references on microservices:
Choosing an architecture style represents the culmination of analysis and thought about trade-offs for architecture characteristics, domain considerations, strategic goals, and a host of other things.
Preferred architecture styles shift over time, driven by:
When choosing an architectural style, an architect must take into account all the various factors that contribute to the structure for the domain design. Architects should go into the decision comfortable with the following things:
Several determinations:
General tip:
Use synchronous by default, asynchronous when necessary
Making architecture decisions involves gathering enough relevant information, justifying the decision, documenting the decision, and effectively communicating the decision to the right stakeholders.
Decision anti-patterns:
Architecturally significant decisions are those decisions that affect [OR]:
Architecture Decision Records – ADRs – short text file describing a specific architecture decision. 5 main sections:
Authors’ recommendation — store ADRs in a wiki, rather than in Git.
ADRs can be used as an effective means to document a software architecture.
Every architecture has risk associated with it — risk involving availability, scalability, data integrity, … — by identifying risks, the architect can address deficiencies and take corrective actions.
The Architecture Risk Matrix – a 2D array of overall impact against likelihood; each dimension has 3 ratings (low, medium, high). When leveraging the risk matrix to qualify risk, consider the impact dimension first and the likelihood dimension second.
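A sketch of such a matrix as code; the numeric ratings (1–3 per dimension, overall risk as their product) and the exact bucket boundaries are assumptions of this sketch, not prescribed by the text:

```python
# Risk matrix sketch: impact and likelihood each rated 1-3,
# overall risk = impact * likelihood, bucketed into low/medium/high.
# The numeric scoring and bucket boundaries are illustrative assumptions.
RATING = {"low": 1, "medium": 2, "high": 3}

def assess_risk(impact, likelihood):
    score = RATING[impact] * RATING[likelihood]
    if score <= 2:
        return "low risk"
    if score <= 4:
        return "medium risk"
    return "high risk"

# Consider impact first, then likelihood: a high-impact but unlikely
# problem still lands in the medium bucket here.
print(assess_risk("high", "low"))
print(assess_risk("medium", "high"))
```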
Risk Assessment – a summarized report of the overall risk of an architecture (the risk matrix can be used to build it).
Risk Storming – a collaborative exercise used to determine architectural risk within a specific dimension (area of risk) — unproven technology, performance, scalability, availability, data loss, single points of failure, security. Risk storming is broken down into 3 primary activities:
Risk storming can be used for other aspects of software development — for example story grooming — story risk, the likelihood that the story will not be completed.
Effective communication becomes critical to an architect's success. No matter how brilliant an architect's ideas are, they won't matter if the architect can't convince managers to fund them and developers to build them.
Diagramming and presenting are 2 critical soft skills for architects.
Irrational Artifact Attachment – the proportional relationship between a person's attachment to some artifact and how long it took to produce. If you spend a lot of time on something, you may develop an irrational attachment to that artifact (proportional to the time invested). Use an Agile approach to avoid this anti-pattern: create just-in-time artifacts and use simple tools to create diagrams.
Baseline features of a diagram tool:
Diagram Guidelines:
Book recommendation: Presentation Patterns
When preparing a presentation – use different type of transition when changing a topic, use the same transition within a topic.
When presenting, the presenter has 2 presentation channels: verbal and visual. By placing too much text on the slides and then saying the same words, the presenter is overloading one information channel and starving the other.
Using animations and transitions in conjunction with incremental builds (reveal information gradually) allows the presenter to make more compelling, entertaining presentations.
Info-decks – slide decks that are not meant to be projected but rather summarize information graphically, essentially using a presentation tool as a desktop publishing machine. They contain all the information, are meant to be standalone, no need for presenter.
Invisibility – a pattern where the presenter inserts a blank slide within a presentation to refocus attention solely on the speaker (turning off the visual channel).
A software architect is also responsible for guiding the development team through the implementation of the architecture.
Software architect should create and communicate constraints, or the box, in which developers can implement the architecture. Tight boundaries = frustration, loose boundaries = confusion, appropriate boundaries = effective teams.
3 basic types of architect personalities:
Elastic Leadership – https://www.elasticleadership.com — knowing how much control to exert on a given development team, factors to determine how many teams a software architect can manage at once:
3 factors when considering the most effective team size:
An effective architect not only helps guide the development team through the implementation of the architecture, but also ensures that the team is healthy, happy, and working together to achieve a common goal.
Checklists work and provide an excellent vehicle for making sure everything is covered and addressed. The key to making teams effective is knowing when to leverage checklists and when not to. Most effective checklists:
Many items from the checklists can be automated.
Don’t worry about stating the obvious in a checklist. It’s the obvious stuff that’s usually skipped or missed.
Negotiation is one of the most important skills a software architect can have. Effective software architects understand the politics of the organization, have strong negotiation and facilitation skills, and can overcome disagreements when they occur to create solutions that all stakeholders agree on.
“We must have zero downtime”, “I need these features yesterday”, …:
Leverage the use of grammar and buzzwords to better understand the situation
Enter the negotiation with as many arguments as possible:
Gather as much information as possible before entering into a negotiation
Save this negotiation tactic for last:
When all else fails, state things in terms of cost and time
Does the entire system require 99.999% availability, or just some parts?:
Leverage the “divide and conquer” rule to qualify demands or requirements
Demonstrate your point with a real-life example:
Always remember that demonstration defeats discussion
Avoid being too argumentative or letting things get too personal in a negotiation — calm leadership combined with clear and concise reasoning will always win a negotiation
Ivory Tower architecture anti-pattern – Ivory tower architects are ones who simply dictate from on high, telling development teams what to do without regard to their opinion or concerns. This usually leads to a loss of respect for the architect and an eventual breakdown of the team dynamics.
When convincing developers to adopt an architecture decision or to do a specific task, provide a justification rather than “dictating from on high”
By providing a reason why something needs to be done, developers will more likely agree with the request. Most of the time, once a person hears something they disagree with, they stop listening. By stating the reason first, the architect is sure that the justification will be heard.
If a developer disagrees with a decision, have them arrive at the solution on their own
Win-win situation: either the developer fails trying and the architect automatically gets buy-in for the architect’s decision, or the developer finds a better way to address the concerns.
Accidental complexity – we have made a problem hard, architects sometimes do this to prove their worth when things seem too simple or to guarantee that they are always kept in the loop on discussions and decisions. Introducing accidental complexity into something that is not complex is one of the best ways to become an ineffective leader as an architect. An effective way of avoiding accidental complexity is what we call the 4C’s of architecture:
Be pragmatic, yet visionary. Visionary – Thinking about or planning the future with imagination or wisdom. Pragmatic – Dealing with things sensibly and realistically in a way that is based on practical rather than theoretical considerations.
Bad software architects leverage their title to get people to do what they want. Effective software architects get people to do things not by leveraging their title, but by leading through example. Lead by example, not by title.
To lead a team and become an effective leader, a software architect should try to become the go-to person on the team – the person developers go to for their questions and problems. Another technique to start gaining respect as a leader and become the go-to person on the team is to host periodic brown-bag lunches to talk about specific technique or technology.
Too many meetings? Ask for the meeting agenda ahead of time to help quantify if you are really needed at the meeting or not.
Meetings should be either first thing in the morning, right after lunch, or toward the end of the day, but not during the day when most developers experience flow state.
“The most important single ingredient in the formula of success is knowing how to get along with people.” ~ Theodore Roosevelt
An architect must continue to learn throughout their career. Technology breadth is more important to architects than depth.
The 20-Minute Rule – devote at least 20 minutes a day to your career as an architect by learning something new or diving deeper into a specific topic. Spend min. 20 minutes to Google some unfamiliar buzzwords.
Technology Radar: https://www.thoughtworks.com/radar
You can create your own personal technology radar. It helps formalize thinking about technology and balance opposing decision criteria.
Architects should choose some technologies and/or skills that are widely in demand and track that demand. But they might also want to try some technology gambits, like open source or mobile development.
Architects can use social media to enhance their technical breadth. Using media like Twitter professionally, architects should find technologists whose advice they respect. This allows them to build a network around new, interesting technologies, assess them, and keep up with the rapid changes in the technology world.
Knowledge of the architecture structure, architecture characteristics, architecture decisions, and design principles.
Decisions: what is and what is not allowed, rules for a system how it should be constructed. Design principles: guidelines for constructing systems.
Make architecture decisions. Continually analyze the architecture. Keep current with the latest trends. Ensure compliance with decisions. Diverse exposure and experience. Have business domain knowledge. Possess interpersonal skills. Understand and navigate politics.
Everything in software architecture is a trade-off.
Chapter 2: Architectural thinking
In a traditional model the architect is disconnected from the development teams, and as such the architecture rarely provides what it was originally set out to do. Architect defines architecture characteristics, selects architecture patterns and styles, then these artifacts are handed off to the development teams.
Boundaries between architects and developers must be broken down. Unlike the old-school waterfall approaches to static and rigid software architecture, the architecture of today’s systems changes and evolves every iteration. A tight collaboration is essential for the success.
Stuff you know: Python
Stuff you know you don’t know: Deep Learning
Stuff you don’t know you don’t know: 🤷
Architects must make decisions that match capabilities to technical constraints, a broad understanding of a wide variety of solutions is valuable.
Two components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system.
Connascence allows us to go beyond the binary of “coupled” and “not coupled”, serving as a tool to measure coupling and describe how bad it is under different levels and kinds.
Static connascence refers to source-code-level coupling – name (multiple entities must agree on the name), type (multiple entities must agree on the type), meaning (multiple entities must agree on the meaning of particular values), position (multiple entities must agree on the order of the values), algorithm (multiple entities must agree on a particular algorithm).
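Connascence of position, one of the static kinds above, can be shown in a few lines; the function and callers are made-up examples:

```python
# Connascence of position (static): every caller must agree on the order
# of the arguments.
def create_user(name, email):
    return {"name": name, "email": email}

# This call runs without error but silently swaps the fields:
broken = create_user("alice@example.com", "Alice")

# Keyword arguments weaken the coupling to connascence of name,
# which is easier to detect and refactor:
fixed = create_user(name="Alice", email="alice@example.com")
```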
Dynamic connascence analyzes calls at runtime – execution (order of execution), timing (timing of the execution of multiple components), values (several values relate to one another), identity (several values relate to one another and must change together).
[STATIC] Multiple components must agree on the type of entity.
Identity. Multiple components must reference the same entity. For example, when two independent components must share and update a common data source.
Name. Multiple components must agree on the name.
Static. Dynamic connascence is harder for architects to determine because we lack tools to analyze runtime calls as effectively as we can analyze the call graph.
Chapter 4: Architecture Characteristics Defined
Implicit – does not appear in the requirements, yet is necessary for project success. Domain knowledge is required to uncover such characteristics.
Explicit – characteristic listed in the requirements.
Availability, Continuity, Performance, Reliability, Recoverability, Scalability, …
Configurability, Extensibility, Maintainability, …
Accessibility, Authentication, Authorization, Legal, Security, Privacy, …
The ultimate answer for architectural questions: it depends on…
Chapter 5: Identifying Architectural Characteristics
Over-specifying architecture characteristics may kill the project. Example: the Vasa – a Swedish warship that was supposed to be magnificent but turned out too heavy and too complicated, and sank on its maiden voyage.
Keep the design simple.
True.
Agility, testability, deployability
Scalability – the ability to handle a large number of concurrent users without serious performance degradation.
Elasticity – the ability to handle bursts of requests.
Interoperability, scalability, adaptability, extensibility.
Chapter 6: Measuring and Governing Architecture Characteristics
Overly complex code represents a code smell – it harms virtually every one of the desirable characteristics.
Any mechanism that provides an objective integrity assessment of some architecture characteristic or combination of architecture characteristics. Many tools may be used to implement fitness functions: metrics, monitors, unit tests, chaos engineering, …
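A minimal fitness-function sketch, assuming hypothetical module names: a unit-test-style check that the dependency direction between two packages is never reversed. In practice the dependency map would be extracted from the codebase (e.g. by parsing imports) or verified with an ArchUnit-style tool:

```python
# Fitness function sketch: governance check on dependency direction.
DEPENDENCIES = {
    "web": {"domain"},   # the web layer may depend on the domain layer
    "domain": set(),     # the domain layer must depend on nothing above it
}

def no_reverse_dependency(deps: dict) -> bool:
    """The 'domain' package must never import from 'web'."""
    return "web" not in deps.get("domain", set())
```

Run as part of CI, such a check fails the build the moment a developer introduces the forbidden dependency.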
Code automatic scalability tests and compare results.
Architects must ensure that developers understand the purpose of the fitness function before imposing it on them.
Chapter 7: Scope of Architecture Characteristics
The architectural quantum is the smallest possible item that needs to be deployed in order to run an application.
4 because each service can be deployed separately.
2 quanta – ordering and warehouse management, with separate databases.
Chapter 8: Component-Based Thinking
Components – the physical manifestation of a module. Components offer a language-specific mechanism to group artifacts together, often nesting them to create stratification. Components also appear as subsystems or layers in architecture, as the deployable unit of work for many event processors.
Technical partitioning – organizing architecture based on technical capabilities (presentation, business, service, persistence).
Domain partitioning – a modeling technique for decomposing complex systems. In DDD, the architect identifies domains that are independent and decoupled from each other. The microservices architecture style is based on this philosophy.
Better reflects the kinds of changes that most often occur on projects.
Separation based on technical partitioning enables developers to find certain categories of code base quickly, as it is organized by capabilities.
Arises when the architect incorrectly identifies the database relationships as workflows in the application, a correspondence that rarely manifests in the real world. This anti-pattern indicates a lack of thought about the actual workflows of the application. Components created with the entity trap tend to be too coarse-grained.
Latency is Zero, Bandwidth is Infinite, The Network is Reliable, The Network is Secure, The Topology Never Changes, There is Only One Administrator, Transport Cost is Zero, The Network is Homogenous
Debugging a distributed architecture, Distributed transactions, Contract maintenance and versioning.
Requesting/receiving too much data when only a small subset is needed — e.g. 2,000 requests × 100 kB where 2,000 requests × 10 kB would suffice.
Source: https://github.com/pkardas/notes/blob/master/books/fundamentals-of-architecture.md#preface-invalidating-axioms
A class should have only one reason to change, so to reduce reasons for modification, one class should have one responsibility. It is bad practice to create classes that do everything.
Why is it so important that a class has only one reason to change? If a class has more than one responsibility, the responsibilities become coupled, which can lead to surprising consequences: one change breaks another functionality.
You can avoid these problems by asking a simple question before you make any changes: what is the responsibility of your class / component / microservice? If your answer includes the word “and”, you’re most likely breaking the single responsibility principle.
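The “and” test can be made concrete with a small sketch; the class names are illustrative, not from the text. A Report class that both formatted and saved itself would have two reasons to change, so the responsibilities are split:

```python
# SRP sketch: each class has exactly one reason to change.
class Report:
    def __init__(self, body: str):
        self.body = body

class ReportFormatter:
    """Changes only when the presentation format changes."""
    def to_html(self, report: Report) -> str:
        return f"<p>{report.body}</p>"

class ReportRepository:
    """Changes only when the storage mechanism changes."""
    def save(self, report: Report, storage: dict) -> None:
        storage["report"] = report.body
```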
Classes, modules, functions, etc. should be open to extension but closed to modification.
Code should be extensible and adaptable to new requirements. In other words, we should be able to add new system functionality without having to modify the existing code. We should add functionality only by writing new code.
If we want to add a new thing to the application and we have to modify the “old”, existing code to achieve this, it is quite likely that it was not written in the best way. Ideally, new behaviors are simply added.
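A minimal sketch of “adding functionality only by writing new code”: a new discount type is introduced as a new subclass, with no edits to `checkout` or to existing discounts. All names here are illustrative:

```python
# OCP sketch: the system is extended by adding classes, not modifying them.
from abc import ABC, abstractmethod

class Discount(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(Discount):
    def apply(self, price: float) -> float:
        return price

class PercentDiscount(Discount):
    def __init__(self, pct: float):
        self.pct = pct

    def apply(self, price: float) -> float:
        return price * (1 - self.pct)

def checkout(price: float, discount: Discount) -> float:
    # closed to modification: new discount types never require edits here
    return discount.apply(price)
```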
This rule deals with the correct use of inheritance and states that wherever we pass an object of a base class, we should be able to pass an object of a class inheriting from that class.
Example of violation:
```python
class A:
    def foo(self) -> str:
        return "foo"

class B(A):
    def foo(self, bar: str) -> str:
        return f"foo {bar}"
```
B.foo does not take the same arguments as A.foo, meaning A and B are not compatible: B cannot be used where an A is expected (and vice versa).
Clients should not be forced to depend upon interfaces that they do not use. ISP splits interfaces that are very large into smaller and more specific ones so that clients will only have to know about the methods that are of interest to them.
Example of violation:
```python
class Shape:
    def area(self) -> float:
        raise NotImplementedError

    def volume(self) -> float:
        raise NotImplementedError
```
A 2D triangle does not have a volume, so it would be forced to implement an interface method it does not need. To solve this, split it into multiple interfaces: Shape and a separate 3D shape interface.
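The suggested split, sketched with abstract base classes. `Shape3D` is an assumed spelling (a Python identifier cannot start with a digit):

```python
# ISP sketch: a 2D shape only has to implement area().
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

class Shape3D(Shape):
    @abstractmethod
    def volume(self) -> float: ...

class Triangle(Shape):
    def __init__(self, base: float, height: float):
        self.base, self.height = base, height

    def area(self) -> float:
        return 0.5 * self.base * self.height
```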
High-level modules, which provide complex logic, should be easily reusable and unaffected by changes in low-level modules, which provide utility features. To achieve that, you need to introduce an abstraction that decouples the high-level and low-level modules from each other.
Entities must depend on abstractions, not on concretions: the high-level module must not depend on the low-level module; both should depend on abstractions.
For example password reminder should not have knowledge about database provider (low level information).
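The password-reminder example might be sketched like this, with the high-level class depending only on an abstraction; class names are assumptions for illustration:

```python
# DIP sketch: PasswordReminder never touches a concrete database.
from abc import ABC, abstractmethod

class UserStore(ABC):
    @abstractmethod
    def find_email(self, user_id: int) -> str: ...

class InMemoryUserStore(UserStore):
    """One concrete implementation; a SQL-backed one could be swapped in."""
    def __init__(self, users: dict):
        self.users = users

    def find_email(self, user_id: int) -> str:
        return self.users[user_id]

class PasswordReminder:
    def __init__(self, store: UserStore):
        self.store = store  # depends on the abstraction only

    def remind(self, user_id: int) -> str:
        return f"Reminder sent to {self.store.find_email(user_id)}"
```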
“Every piece of knowledge must have a single, unambiguous, authoritative representation within a system”. When the DRY principle is applied successfully, a modification of any single element of a system does not require a change in other logically unrelated elements.
The KISS principle states that most systems work best if they are kept simple rather than made complicated; therefore, simplicity should be a key goal in design, and unnecessary complexity should be avoided.
Each transaction is either carried out in full, or the process halts and the database reverts to the state before the transaction started. This ensures that all data in the database is valid.
A processed transaction will never endanger the structural integrity of the database. Database is always in consistent state.
Transactions cannot compromise the integrity of other transactions by interacting with them while they are still in progress.
The data related to the completed transaction will persist even in the cases of network or power outages. If a transaction fails, it will not impact the manipulated data.
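Atomicity can be demonstrated with Python’s built-in sqlite3 module: a transaction containing a failing statement is rolled back in full, leaving the data as it was:

```python
# Atomicity sketch: the failed second statement rolls back the whole
# transaction, so the earlier UPDATE never takes effect.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()

try:
    with conn:  # one transaction: commit on success, rollback on exception
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
        conn.execute("INSERT INTO accounts VALUES (1, 50)")  # duplicate key -> fails
except sqlite3.IntegrityError:
    pass

balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
# balance is still 100: the partial update was rolled back
```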
Ensure availability of data by spreading and replicating it across the nodes of the database cluster – this is not done immediately.
Due to the lack of immediate consistency, data values may change over time. The state of the system could change over time, so even during times without input there may be changes going on due to ‘eventual consistency’, thus the state of the system is always ‘soft’.
The system will eventually become consistent once it stops receiving input. The data will propagate to everywhere it should sooner or later, but the system will continue to receive input and is not checking the consistency of every transaction before it moves onto the next one.
In theoretical computer science, the CAP theorem states that it is impossible for a distributed data store to simultaneously provide more than two out of the following three guarantees:
Every read receives the most recent write or an error. Refers to whether all nodes within a cluster see the same data they are supposed to at the same time. (Note: despite the shared name, this is not the same as the C in ACID, which refers to integrity constraints.)
Every request receives a (non-error) response, without the guarantee that it contains the most recent write. Is the given service or system available when requested? Does each request get a response outside of failure or success?
Represents the fact that a given system continues to operate even under circumstances of data loss or system failure. A single node failure should not cause the entire system to collapse.
Database normalisation is the process of structuring a database, usually a relational database, in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity.
To satisfy 1NF, the values in each column of a table must be atomic.
Must be in 1NF + single column primary key (no composite keys).
Must be in 2NF + no transitive functional dependencies.
Transitive Functional Dependencies – when changing a non-key column might cause another non-key column to change. For example, in a table (BookID, Author, AuthorCountry): AuthorCountry depends on Author rather than on the key, so updating Author may force AuthorCountry to change as well.
Source: https://github.com/pkardas/notes/blob/master/patterns/abbreviations.md
The **main Natural Language Processing (NLP) algorithms** include a variety of techniques that enable the understanding and manipulation of text and human language. Here are some of the most prominent:
### 1. **Rule-Based Models**
– **Syntactic Analysis**: Uses grammars to analyze the structure of sentences.
– **Semantic Analysis**: Interprets the meaning of words and sentences.
### 2. **Statistical Models**
– **Naive Bayes**: A probabilistic classifier that assumes independence between features.
– **Hidden Markov Models (HMM)**: Used in speech recognition and part-of-speech tagging.
### 3. **Neural Networks**
– **Word Embeddings**: Techniques such as Word2Vec and GloVe that turn words into high-dimensional vectors.
– **Recurrent Neural Networks (RNNs)**: Especially useful for text sequences, as in machine translation.
– **Long Short-Term Memory (LSTM)**: A type of RNN that can learn long-term dependencies.
### 4. **Transformers**
– **BERT (Bidirectional Encoder Representations from Transformers)**: A model that captures the context of a word based on all the other words in the sentence.
– **GPT (Generative Pre-trained Transformer)**: A model that generates coherent, context-aware text.
### 5. **Machine Learning Techniques**
– **Text Classification**: Algorithms such as Support Vector Machines (SVM) and Decision Trees.
– **Sentiment Analysis**: Used to determine the polarity of a text (positive, negative, neutral).
### 6. **Preprocessing Techniques**
– **Tokenization**: Splitting text into words or sentences.
– **Stop-Word Removal**: Eliminating common words that add no meaning (such as “and”, “the”, “of”).
– **Stemming and Lemmatization**: Reducing words to their roots.
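The preprocessing techniques above can be sketched in a few lines of plain Python; the stop-word list and the suffix-stripping “stemmer” are deliberately naive stand-ins for what libraries such as NLTK or spaCy do properly:

```python
# Minimal text-preprocessing sketch: tokenization, stop-word removal,
# and naive suffix stripping in place of real stemming.
import re

STOP_WORDS = {"the", "a", "of", "and", "to"}  # tiny illustrative list

def tokenize(text: str) -> list:
    """Split lowercased text into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def remove_stop_words(tokens: list) -> list:
    return [t for t in tokens if t not in STOP_WORDS]

def stem(token: str) -> str:
    """Strip a few common English suffixes (very naive)."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token
```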
These algorithms and techniques are fundamental for building NLP applications such as chatbots, machine translators, recommendation systems, and much more. If you need more detail on any specific algorithm, feel free to ask!
```shell
$ wget http://media.sundog-soft.com/es7/shakes-mapping.json
$ curl -H 'Content-Type: application/json' -XPUT 127.0.0.1:9200/shakespeare --data-binary @shakes-mapping.json
$ wget http://media.sundog-soft.com/es7/shakespeare_7.0.json
$ curl -H 'Content-Type: application/json' -XPOST '127.0.0.1:9200/shakespeare/_bulk?pretty' --data-binary @shakespeare_7.0.json
$ curl -H 'Content-Type: application/json' -XGET '127.0.0.1:9200/shakespeare/_search?pretty' -d '
{
  "query": {
    "match_phrase": {
      "text_entry": "to be or not to be"
    }
  }
}'
```
Answer the following question with the best possible answers. Question: What’s your motivation to join us?
Joining our team can be a transformative experience for individuals looking to grow both personally and professionally. Here are some compelling reasons to consider:
Joining us means becoming part of a dynamic team that’s focused on achieving excellence while fostering a culture of support and growth. We look forward to welcoming you!
AI’s contribution comes in two main forms: generative and predictive. In the following article, we will explore the distinctions between generative and predictive AI, showing how each type is shaping the future of problem-solving across many fields.
Generative AI is a type of artificial intelligence that can create new content, such as text, images, music, or even videos, based on the data it was trained on. Instead of only analyzing or processing existing information, it generates new ideas and outputs.
Imagine asking a generative AI model like ChatGPT to write a short story about a dragon and a princess. The AI uses what it knows about narrative, characters, and plots to create a completely new story. It does not simply copy existing stories; it combines ideas in creative ways to generate something unique.
In a practical application, generative AI can be used in art. For example, an AI model can draw inspiration from thousands of paintings and create an entirely new artwork that has never been seen before, blending styles and techniques in innovative ways.
Predictive AI refers to technology that uses data, algorithms, and machine learning to predict future outcomes based on historical data. It analyzes patterns and trends to make informed guesses about what may happen next.
For example, imagine a store that wants to know how much ice cream to stock for the summer. The store reviews sales data from previous summers, including factors such as temperature, local events, and promotions. Using predictive AI, it analyzes this data to find patterns, such as hot days leading to more ice cream sales.
The AI predicts that on days when the temperature is above 30 °C, ice cream sales will increase by 50%. Based on this prediction, the store decides to stock more ice cream on hot days, ensuring it has enough for customers without overstocking.
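The store’s rule of thumb can be expressed as a toy predictive function; the numbers simply mirror the example above, whereas a real system would fit a model to historical sales data:

```python
# Toy prediction rule from the ice-cream example: +50% sales above 30 °C.
def predict_sales(base_sales: int, temperature_c: float) -> int:
    """Return expected units sold given a baseline and the forecast temperature."""
    if temperature_c > 30:
        return int(base_sales * 1.5)
    return base_sales
```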
Although both types of AI are powerful, they serve different purposes. Let’s look at the key differences.
While generative AI draws attention for its novel content-creation capabilities, predictive AI remains a powerful tool for improving operational efficiency and generating substantial cost savings in established business processes.
Predictive AI enhances existing operations, leading to significant efficiency gains. For example, UPS, the global shipping and logistics company, saves US$35 million annually by optimizing delivery routes, while banks can save millions by accurately predicting fraudulent transactions. The technology has a proven track record of delivering high returns through processes companies have already established.
Predictive AI generally works without human intervention, making instant decisions based on data analysis. For example, it can automatically approve credit card transactions or optimize ad placements on websites. In contrast, generative AI usually requires human oversight, since its outputs need to be reviewed for accuracy and quality, making it less suitable for fully automated tasks.
Predictive AI models are typically much lighter and less resource-intensive than the complex models used in generative AI. While generative models can consist of hundreds of billions of parameters and require extensive training data, predictive models often need only a few thousand parameters, making them easier and cheaper to deploy.
Generative AI and predictive AI serve different purposes and functions, so neither is a direct replacement for the other. Although generative AI can enhance predictive models (for example, by generating scenarios or simulations based on predictions), it cannot fully replace predictive AI’s analytical capabilities. Each has its strengths and applications, and they can complement each other in many fields, but they are not interchangeable.
The future lies in investing wisely to leverage the partnership between predictive and generative AI. Generative AI excels at creating content and innovative solutions, while predictive AI focuses on forecasting trends and optimizing decisions. Together, they enhance business operations, leading to measurable value and improved ROI.
For example, in healthcare, predictive AI forecasts patient outcomes, enabling timely interventions, while generative AI can help create personalized treatment plans. In finance, predictive AI analyzes market data to improve trading strategies, while generative AI can assist in simulating various investment scenarios.
This synergy between generative and predictive AI not only streamlines processes and increases profitability but also fosters customer engagement through personalized experiences. Companies that harness the strengths of both technologies can drive operational efficiency, respond quickly to market needs, and maintain a competitive edge.
In the evolving AI landscape, the strategic integration of generative and predictive capabilities is key to unlocking their full potential, ensuring that companies achieve immediate returns while preparing for a future defined by AI innovation.