AI / ML – Past, Present & Future – Part 2
 

A quick recap: in my last update we looked at the advent of the industrial revolutions and at the current revolution of the 'machines' that we are now experiencing, and at how these new-age machines that learn will, by utilizing AI/ML, mould the world we are going to experience. In this excerpt we are going to understand the machine itself, the raw material that constitutes it, and how the world of AI/ML comes alive.

New Machine: A system of intelligence that combines software, hardware, data and human input:

  • Software that learns
  • Massive hardware processing power
  • Huge amounts of data

Any new machine exhibiting AI has three main elements:

1. Digital process Logic

  • Transform many manual processes into automated ones
  • Compare the car dispatch process of a traditional taxi company vs Uber
  • A digitized process multiplied over millions of transactions revolutionizes an industry – structuring the process is the hardest part
    • Thereby new large databases that are stable, scalable and tested (e.g. Hadoop) are finding favor against Oracle/SAP

2. Machine Intelligence

  • A combination of algorithms, automated processes, machine learning and neural networks working over a richer data set – the HEART of the system
    • Thereby highly efficient, always-on plumbing

3. Software Ecosystem

  • Multiple systems of intelligence connected through APIs. For example, Uber uses Twilio for cloud communications, Google for maps, Braintree for payments, SendGrid for email, etc.
    • Thereby an intelligent system in action

So, finally, what it takes to run an AI system is a combination of the above – not just a system, but a very intelligent system based on new and enhanced learning.

Figure: Intelligent System working (Source: Internet)

To illustrate this with a live example, consider the most successful internet media-service provider and production company in North America. I have put together the to-be story of the system of intelligence that they built.

There is a big difference between merely having all the necessary ingredients of the new machines and actually getting them to perform at a high level. An intelligent system that can help you be the Michael Phelps of whatever race you are in will have all or most of the below characteristics to make it successful.

  • Learns more than any other system
  • Is open to changes and corrections
  • Is not just automated but also involves human inputs
  • Is focused on a confined problem
  • Treats the individual experience as top priority
  • Keeps constantly improving

Once the intelligent system is in place, you finally need a way to measure whether it is doing the right things or not –

  1. An intelligent system has to become better and better, and that depends solely on the quality of the data being fed into it
  2. Intelligent systems have to be a journey for the organization and not just an individual contribution
  3. The system should take ownership of more and more of the data analysis and should reduce human intervention

Every day that passes gives us more evidence and strengthens our conviction that the intelligent systems we are trying to understand in this part are the engines of the fourth industrial revolution. Individuals and companies that get on this bandwagon early are the ones reaping rich benefits by solving their major problems. So it's obvious that we need to go the AI/ML way sooner or later. Are you ready?

Stay tuned… In Part III of this foray, we will quickly look through the making of the data that makes an AI system successful and then dwell upon the digital business models and solutions – the "Robots" – that outline the design and delivery of the AI platform.

AI / ML – Past, Present & Future – Part I
 

The world has seen development and growth primarily driven by the industrial revolutions. Each of these revolutions was a time of economic dislocation, when old ways of production became defunct and gave way to far better, newer ways of production that could harness the improvements brought in by new machines. The first industrial revolution was powered by the invention of the loom, the second by the steam engine and the third by the assembly line; the fourth, however, will be powered by machines that seem to think. We are HERE, in the fourth one.

Between one industrial revolution and the next there is a long and bumpy road connecting one era of business and technology to the other; the evolution of each industrial revolution follows the path of an S-curve (as shown in the figure below):

  1. IDEA BURST: Breakthrough; high concentration of wealth; new industries created; new tech creates press clippings but has no impact on existing industries
  2. BUMPY ROAD: The revolution stalls; skepticism about the value created in phase 1; economic models and value chains are created; existing industries begin to change
  3. MASSIVE LIFT UP: Everybody is richly rewarded; national GDP gets a vertical lift-off; wealth is distributed widely

Just to see the impact of the AI-driven world in front of us, a few common-sense uses of AI in the future are below:

  • One third of all food produced goes to waste; AI could be used to route it to third-world countries to address the hunger prevalent there
  • 12 million medical misdiagnoses in the US alone contribute to 400,000 deaths; with the right usage of AI, most of these deaths can be avoided
  • Driverless cars are reducing the annual number of accidents from 4.2 to 3.2 per million miles driven, and this will improve as days go by

Now that the machines are in, we need to see what we should expect:

  • Technology will be embedded into everything (IoT – Internet of Things)
  • As machines become better, it is obvious that by the standards of the year 2030 the current framework of machines will look primitive; expect continuous improvement of these machines
  • Becoming digital – mastering the three Ms (raw Materials, new Machines, and business Models)

While job displacement by machines sounds detrimental to many human futures, the pace of elimination will be slow. Consider the following:

  • Most likely scenario: 12% of jobs eliminated in the next 10 to 15 years
    • 3 scenarios
      • Job automation: 12% of jobs are at risk
      • Job enhancement: 75% of existing jobs will be altered
      • Job creation: 13% net new jobs will be created due to new machine requirements or new job categories

The advent of 13% new jobs, plus the jobs that can't be automated or enhanced, will still need human intervention and will keep humans truly needed vis-a-vis machines replacing each one of us. Scary, isn't it?

Source: internet

So let me introduce you to some key definitions to keep us on track in this arduous journey.

What is AI?

Artificial Intelligence – Coined in 1956 by Dartmouth Assistant Professor John McCarthy, 'Artificial Intelligence' (AI) is a general term that refers to hardware or software that exhibits behaviour which appears intelligent. In the words of Professor McCarthy, it is "the science and engineering of making intelligent machines, especially intelligent computer programs."

Other sources describe AI (Artificial Intelligence) as an area of computer science that focuses on machines that learn. There are three types of AI prevalent:

  • Narrow AI (ANI)/Applied AI: purpose-built, with a business focus on a specific task, e.g. driving a car, reviewing an X-ray, tracking financial trades
  • General AI (AGI)/Strong AI: the pursuit of a machine with the same general intelligence as a human, e.g. figuring out how to make coffee in an average American home
  • Super AI: 10 (or 1,000) steps ahead of us – a technological genie that could wreak havoc around us

By the way, AI has existed for decades, via rules-based programs that deliver rudimentary displays of 'intelligence' in specific contexts. Progress, however, has been limited, because the algorithms needed to tackle many real-world problems are too complex for people to program by hand. Resolving this class of complex problems is the realm of Machine Learning (ML).

Machine learning (ML) is a sub-set of AI. All machine learning is AI, but not all AI is machine learning. Machine learning lets us tackle problems that are too complex for humans to solve by shifting some of the burden to the algorithm. As AI pioneer Arthur Samuel wrote in 1959, machine learning is the 'field of study that gives computers the ability to learn without being explicitly programmed.'

The goal of most machine learning is to develop a prediction engine for a particular use case. An algorithm will receive information about a domain (say, the films a person has watched in the past) and weigh the inputs to make a useful prediction (the probability of the person enjoying a different film in the future).  Machine learning algorithms learn through training. An algorithm initially receives examples whose outputs are known, notes the difference between its predictions and the correct outputs, and tunes the weightings of the inputs to improve the accuracy of its predictions until they are optimized.
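
To make the training loop described above concrete, here is a minimal TypeScript sketch of a single prediction model (logistic regression) whose input weights are tuned on hypothetical viewing data; the feature names and numbers are invented purely for illustration.

```typescript
// A tiny "prediction engine": weights are nudged to shrink the gap between
// the model's predictions and the known outcomes in the training examples.

type Example = { features: number[]; liked: number }; // liked: 1 = enjoyed the film, 0 = did not

const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x));

function predict(weights: number[], bias: number, features: number[]): number {
  const score = features.reduce((sum, f, i) => sum + f * weights[i], bias);
  return sigmoid(score); // probability the person enjoys the film
}

function train(examples: Example[], epochs = 1000, learningRate = 0.1) {
  let weights = new Array(examples[0].features.length).fill(0);
  let bias = 0;
  for (let epoch = 0; epoch < epochs; epoch++) {
    for (const ex of examples) {
      const error = predict(weights, bias, ex.features) - ex.liked; // prediction minus known output
      // Nudge each weight in the direction that reduces the error.
      weights = weights.map((w, i) => w - learningRate * error * ex.features[i]);
      bias -= learningRate * error;
    }
  }
  return { weights, bias };
}

// Hypothetical viewing history: [hours of sci-fi watched, hours of drama watched]
const history: Example[] = [
  { features: [5, 0], liked: 1 },
  { features: [4, 1], liked: 1 },
  { features: [0, 5], liked: 0 },
  { features: [1, 4], liked: 0 },
];

const model = train(history);
console.log(predict(model.weights, model.bias, [3, 1])); // high probability for a sci-fi fan
```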

Why is AI important?

AI is important because it tackles difficult problems in a way our human brain would, but much faster and with fewer errors – obviously resulting in greater human well-being. Since the 1950s, AI research has focused on five fields of enquiry:

  1. Reasoning: the ability to solve problems through logical deduction
  2. Knowledge: the ability to represent knowledge about the world (the understanding that there are certain entities, events and situations in the world; those elements have properties; and those elements can be categorised.)
  3. Planning: the ability to set and achieve goals (there is a specific future state of the world that is desirable, and sequences of actions can be undertaken that will effect progress towards it)
  4. Communication: the ability to understand written and spoken language.
  5. Perception: the ability to deduce things about the world from visual images, sounds and other sensory inputs.

AI has thus already gone past imagination and is already part of our homes, workplaces, communities and more. To put it simply, it is infiltrating all the frameworks that drive the global economy. From Siri, Alexa and Google Home to Nest and Uber, the world is covered with smart machines operating on extremely strong software platforms that are, in turn, in self-learning mode. And I am not sure whether this is the best part or the scary part – this is just the BEGINNING! I call it scary because these new inventions are always "ready to learn" and constantly "adding intelligence", which will very soon challenge and enhance the intellect and experience of the savviest professionals in every sector.

Stay tuned… In Part II of this foray, we will dwell upon the raw materials and new machines that form the core of the AI platform.

Choosing between Trunk-Based Development (TBD) and Feature Branching
 

In a development environment like Scrum or XP, the branching strategy can significantly impact the overall speed of delivery of the product. There is detailed documentation on the various branching strategies you may encounter, such as TBD (trunk-based development) and feature branching. In this article, I am covering the various factors that can determine the success or failure of these branching strategies.

What is feature branching?

In this method, a code branch is created for each and every feature. In agile environments, typically, a branch is created for every story in the sprint.

 

What is trunk-based development (TBD)? When a single trunk/branch is used to build all the features of the product, it is called trunk-based development. Often a release branch is created whenever a release is pushed to production.

The following are the factors to look at before choosing between trunk-based development and feature branching.

  • Location of team members: when team members are co-located, the trunk is the best bet, as it helps us get faster feedback and the team can talk about code changes directly. Since feedback is faster on a single trunk, the team needs to sync up more frequently, and co-location enables these trunk syncs to happen frequently and quickly. In a distributed environment (often the case in our teams), feature branching works best.

  • Speed of the development environment: in a high-speed dev environment the trunk shines, as there is less process overhead such as branching, pull requests, reviews, etc. Feature branching works well when code is churned at a slower pace.

  • Size of the code base: an insanely large number of code lines can be better handled using feature branching, whereas for a smaller code base the relative cost of creating branches etc. can be higher.

In our team meetings, we often talk about the right tools for the problem at hand, and using the right branching strategy is an important decision. We often ask ourselves whether the strategy is right for the given epic/sprint. If the answer is NO for a few sprints, we know we need to change something!

Please comment and let us know your opinion.

Microservices to Micro-Frontends – Simple explanation
 

Microservices

Microservices is one of the newer concepts and a variant of Service-Oriented Architecture (SOA). SOA has been around for almost two decades, while microservices came into existence around 2012.


The idea is to have small, autonomous services work together to build a large, complex application. The approach focuses on individual business sub-domains, building small services that are easier to maintain and promoting independently deployable pieces, thus ensuring that internal changes in one service do not affect or require the redeployment of other services. In today's software development landscape most applications are monolithic, and one of the drawbacks of this approach is that business owners get to take a very limited number of decisions in a year (slower response times because of dependencies). For instance, upgrading a product or adding significant new functionality within a set of related services requires a concerted effort by all concerned parties to deliver changes in a synchronized manner. Microservices allow you to take far-reaching business decisions more spontaneously, as each microservice works independently and the individual team managing it has good control over its changes. This is also helped by the fact that well-implemented microservices steer clear of the 'shared database' model of development (mentioned later in this post).

The microservices architecture allows each team to decide on the technology and infrastructure that works best for them, which may be completely different from the other microservices it interacts with for the very same product. Another aspect of choice is seen when attempting to scale a monolithic application, where you need to scale every component, as all components under one product typically run under the same process. This reduces flexibility in scenarios that require only a small subset of the features to be scaled (e.g. a performance bottleneck in one piece of a payment processing pipeline). Given such circumstances, scaling a monolithic application as a whole may soon turn into an expensive affair. On the other hand, each set of microservices can (potentially) be scaled independently of the others, thus helping focus resources on areas where the problem truly lies.

I have been reading Building Microservices by Sam Newman and really liked the eight key principles he explains. In this post, I am attempting to provide a high-level overview of those principles, and I highly encourage you to read his book to get a more detailed understanding of these concepts.


The eight key principles are

1. Modeled around business domain 

Focus on your business domain and identify individual subdomains and build services. A good way to start would be to follow the principles of Domain-Driven Design.

2. Culture of automation

Infrastructure automation is what smart companies are focusing on today.  Provisioning a new machine, operating system and service should be automated.

Automation testing and continuous delivery are critical as well so as to deploy/release your software frequently and reliably.

3. Hide implementation details.

Hide your database, and hide your functionality. Each service should have its own database and if shared information is needed from other services, leverage service endpoints designed for the specific subdomain to extract what is expected. While migrating a monolithic application to a microservice(ish) structure, it is often considered easiest to tease apart application level code while leaving the (shared) underlying database as is. There is often a variety of rationale provided for doing this ranging from a lack of confidence in the success of this migration to potential issues faced for data analysis and report aggregation purposes. However, this shared database continues to serve as a source of coupling between the independent services far greater than the decoupling achieved by spinning off the application level services.
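
As a minimal sketch of what "ask the owning service" looks like instead of joining a shared database (the service URL, route and field names are hypothetical, and it relies on the global fetch available in modern Node/browsers):

```typescript
// Invoicing service: it owns invoices, but customer data belongs to the
// customer service, so it calls that service's purpose-built endpoint rather
// than reading the customer tables directly.

interface CustomerAddress {
  street: string;
  city: string;
  postalCode: string;
}

const CUSTOMER_SERVICE_URL = "http://customer-service.internal"; // assumed internal DNS name

async function getCustomerAddress(customerId: string): Promise<CustomerAddress> {
  const response = await fetch(`${CUSTOMER_SERVICE_URL}/customers/${customerId}/address`);
  if (!response.ok) {
    throw new Error(`Customer service returned ${response.status}`);
  }
  return (await response.json()) as CustomerAddress;
}

// The invoicing service composes its own data with the other service's answer,
// without ever touching the customer database schema.
async function buildInvoiceHeader(invoiceId: string, customerId: string): Promise<string> {
  const address = await getCustomerAddress(customerId);
  return `Invoice ${invoiceId} – ship to ${address.street}, ${address.city} ${address.postalCode}`;
}
```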

4. Decentralize all the things

Focusing on autonomy (giving people as much freedom as possible to do the job in hand), self-service (do you have to create a ticket to provision a machine, or can you do it all by yourself?), shared governance (making architecture work) and avoiding complex messaging is important.

5. Deploy independently

If you have four services and all of them have to be deployed together due to dependencies, fix that before you create the fifth service. Having one service per host makes life very easy for everyone, and Docker is getting a lot of traction for this kind of isolation of operating environments.

In such an environment of independent deployments, consumers drive the contracts: when you make a change, the consumer service expects not to face problems when your changes are deployed.

With co-existing endpoints, for instance in an upgrade scenario, a consumer service can switch to the newer version while the provider continues supporting the existing version for a limited period, giving other services time to migrate without holding the entire system hostage to its changes.
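
As a minimal sketch of co-existing endpoint versions (assuming an Express-based service; the route paths and response shapes are invented for illustration):

```typescript
import express from "express";

const app = express();

// v1 stays available for existing consumers during the migration window.
app.get("/v1/customers/:id", (req, res) => {
  res.json({ id: req.params.id, name: "Jane Doe" }); // old, flat shape
});

// v2 introduces the new contract; consumers switch over at their own pace.
app.get("/v2/customers/:id", (req, res) => {
  res.json({
    id: req.params.id,
    name: { first: "Jane", last: "Doe" }, // new, structured shape
  });
});

app.listen(3000, () => console.log("customer service listening on 3000"));
```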

6. Consumer First

As the creator of an API, it is very important that you make your service easy to consume. The documentation plays an important role here.

7. Isolate Failures

Microservice architecture doesn’t automatically make your systems more stable. To the contrary, it makes the overall system more vulnerable to certain types of network and hardware related issues (more points of failure). You should have ways to isolate failures and look for ways to recover such as failover caching and retry logic.

8. Highly observable

It is very important to know what is happening in your system with so many moving parts. Each service may depend on multiple services and vice versa.  There needs to be constant observation and monitoring to ensure the whole integration is smooth.

Microservices often communicate via HTTP/REST (synchronous) or utilize asynchronous protocols like JMS, RabbitMQ, etc. It is perfectly fine and acceptable to use synchronous protocols for public APIs, while for the internal communication between microservices you should go with asynchronous protocols.

In a typical monolithic application, when you want to fetch data for a search screen, all you have to do is join multiple tables and present the result to the user. If you want to achieve the same with microservices, there is going to be a big performance hit if you retrieve the data from each and every microservice on the fly, which may not be a good idea. In this situation, I would recommend going with Elasticsearch: you can display the high-level data from the search index, and when a user clicks on an individual item you can then go to the relevant microservices, calling them all in parallel (see the sketch below). This can significantly reduce the pain of performance bottlenecks in homegrown solutions. There are multiple scenarios like this that entice teams and organizations to avoid microservices even though an easier way out exists; I will be discussing some of the most common challenges and their resolution in detail in my upcoming blogs.
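
A minimal sketch of that parallel fan-out on item click, using the global fetch available in modern browsers/Node; the service URLs and response shapes are hypothetical:

```typescript
// When the user opens one search result, fetch its details from the owning
// services in parallel rather than sequentially.

interface Catalog { name: string; description: string }
interface Pricing { price: number; currency: string }
interface Inventory { inStock: number }

async function getJson<T>(url: string): Promise<T> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`${url} returned ${res.status}`);
  return (await res.json()) as T;
}

async function loadProductDetails(productId: string) {
  // All three calls start immediately and resolve together.
  const [catalog, pricing, inventory] = await Promise.all([
    getJson<Catalog>(`http://catalog-service.internal/products/${productId}`),
    getJson<Pricing>(`http://pricing-service.internal/prices/${productId}`),
    getJson<Inventory>(`http://inventory-service.internal/stock/${productId}`),
  ]);
  return { ...catalog, ...pricing, ...inventory };
}
```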

Micro-Frontends

Microservices architecture gives you enormous benefits when done right. When implementing a microservices architecture, you certainly want to keep your services small. Most of the companies and teams I have come in contact with have a tendency to do so only for the backend, primarily for two reasons:

  1. It is expensive in terms of time and money.
  2. Knowledge gap

With a monolithic front end, though, you do not gain most of the benefits over a monolith application, as you can't deploy your backend services independently of the front end. In such scenarios, not only is application scaling a challenge, you also need to update the front end whenever an API is deployed with breaking changes.

 

The idea with a Micro-Frontend is to decompose your application into smaller units based on screens representing domain-specific functionality instead of writing large monolithic front-end applications. The front-ends are self-contained and can be deployed independently. SPAs (single-page applications) are the best way identified so far to go this route, and domain-driven architecture helps achieve this to a great extent. You can have the backend, frontend, data access layer and database – everything required for a subdomain – in one service. Every piece of the service should be owned by an independent team. Collaboration and communication play an important role here, and as long as you adhere to the best practices and principles of microservices, you are likely to be successful and gain the maximum out of your product/software.
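
One simple composition style (among several) is for each micro-frontend to expose a mount/unmount pair while a thin container shell owns only the routing. The sketch below is a minimal illustration; the route names, element IDs and inline stubs are all hypothetical, and in a real setup each entry would be an independently built and deployed bundle:

```typescript
// Thin container shell: each micro-frontend (owned by its own team) registers
// a mount/unmount pair; the container only routes between them.

type MicroFrontend = { mount: (el: HTMLElement) => void; unmount: () => void };

const registry: Record<string, MicroFrontend> = {
  "/catalog": {
    mount: (el) => { el.innerHTML = "<h1>Catalog micro-frontend</h1>"; },
    unmount: () => { /* release catalog-specific resources */ },
  },
  "/checkout": {
    mount: (el) => { el.innerHTML = "<h1>Checkout micro-frontend</h1>"; },
    unmount: () => { /* release checkout-specific resources */ },
  },
};

let active: MicroFrontend | undefined;

function navigate(path: string): void {
  const next = registry[path];
  if (!next) return;
  active?.unmount();                              // let the previous team clean up
  const outlet = document.getElementById("app")!; // shared outlet owned by the shell
  outlet.innerHTML = "";
  active = next;
  active.mount(outlet);                           // hand the DOM subtree to that team
}

window.addEventListener("popstate", () => navigate(window.location.pathname));
navigate(window.location.pathname);
```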

Benefits of Micro-Frontends

  1. Each development team can choose its own technology.
  2. Development and deployment are very quick.
  3. The benefits of microservices can be leveraged in a much better way; dependencies are drastically reduced.
  4. It helps with continuous deployment.
  5. Maintenance and support are easy, as each team owns a specific area.
  6. Testing becomes simpler as well, since for every small change you don't have to touch the entire application.

Challenges

  1. UX consistency is an important aspect. The user experience may become a challenge if individual teams go in their own directions, hence there should be some common medium to ensure UX is not compromised.
  2. Dependencies need to be managed properly. Collaboration can become a challenge at times; the multiple teams working on one product should be aligned and have a common understanding.

 

Bloated Domain Objects And CQRS
 

Problem of Bloated Domain objects

In business software applications, domain objects (entities) are used to represent the business domain. As the application grows and adds more business logic, service layers, mappers and other patterns get applied. Often this leads to the domain objects becoming bloated, and the related components become huge and unmaintainable.

CQRS solves the common problem of having bloated domain objects. Domain objects get bloated largely because of bounded contexts: a series of related contexts makes developers think that a single domain object is sufficient to handle all the related things. For example, a large Invoice object handles invoicing, shipment and the customer's change of address. But in reality, these contexts (invoicing, shipment and address change) need not be tied to the same Invoice entity.

What is Command Query Responsibility Segregation (CQRS)?

In order to simplify the Domain objects, CQRS proposes to have two types of domain entities.

  • Those serving a command (ordering/assertion services) – for example, SaveCustomer, CreateInvoice, ShipProduct, etc.
  • Those serving a query (request) – examples include GetCustomerAddress, SearchCustomer, etc.

With this separation, the complexity (number of fields and methods) of the entities involved is reduced, and hence the data mapper layers and the service layers become simpler.
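
A minimal sketch of the command/query split in TypeScript (the CRM-flavoured types, handlers and data below are invented for illustration):

```typescript
// Command side: small, intention-revealing write models.
interface CreateInvoice {
  kind: "CreateInvoice";
  customerId: string;
  lines: { productId: string; quantity: number }[];
}

interface ShipProduct {
  kind: "ShipProduct";
  invoiceId: string;
  carrier: string;
}

type Command = CreateInvoice | ShipProduct;

function handleCommand(command: Command): void {
  switch (command.kind) {
    case "CreateInvoice":
      // validate business rules, persist the new invoice, emit events...
      console.log(`creating invoice for ${command.customerId}`);
      break;
    case "ShipProduct":
      console.log(`shipping invoice ${command.invoiceId} via ${command.carrier}`);
      break;
  }
}

// Query side: flat, read-optimised models with no business behaviour.
interface CustomerAddressView {
  customerId: string;
  fullAddress: string;
}

function getCustomerAddress(customerId: string): CustomerAddressView {
  // would read from a denormalised view/table built for this query
  return { customerId, fullAddress: "221B Baker Street, London" };
}
```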

Where can I use CQRS?

  • Largely complex systems: applying CQRS to a simple CRUD-based system is overkill. It pays off in domain-heavy systems, such as banking and financing systems or LOB applications, where business logic and boundary conditions are heavy and where DDD (Domain-Driven Design) provides high value.
  • Situations where you apply microservices, eventual consistency and event sourcing: when we have separation of concerns using CQRS, the microservices become much simpler to design and maintain. With event sourcing we are focused on building the query data from other related sources, which is exactly what CQRS promotes.

Final words

CQRS is a carefully thought-out pattern for simplifying and solving large, complex systems.

50 Tips and Tricks for Web Performance
 

I came across the video below on web performance improvement tips and tricks, and it is fantastic.

In this video, Jatinder talks about six fundamental principles for improving web application performance. He also talks a lot about how to decrease CPU time and increase parallelism on the client machine to achieve faster web performance.

While I went through the video, I captured all the tricks he talks about and thought they would be useful for others while they watch it. Please find the tricks below.

Quickly respond to network request

  • Avoid 3XX Redirections (63% of top websites use redirect)
  • Avoid Meta refresh
  • Minimise Server time for Requests
  • Use Content distribution Networks (CDN)
  • Maximise concurrent connections.
  • Reuse connections – don't send Connection: close.
  • Know your other servers – you are only as fast as your weakest link
  • Understand your network timing

Minimise bytes downloaded

  • GZIP-compress network traffic
  • Persist application resources locally (Windows 8 applications)
  • Cache dynamic resources in the application cache (HTML5 app cache)
  • Provide cacheable content
  • Send conditional requests (see the sketch after this list)
  • Cache data requests (jQuery AJAX calls)
  • Standardize the file name capitalization convention
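
A small server-side sketch of a couple of these items (GZIP compression and cacheable content that enables conditional requests), assuming an Express server with the compression middleware package; the route and max-age are arbitrary examples:

```typescript
import express from "express";
import compression from "compression";

const app = express();

// GZIP-compress responses on the wire.
app.use(compression());

// Serve static assets with cache headers so browsers can revalidate with
// conditional requests (If-None-Match / If-Modified-Since) instead of
// re-downloading the bytes.
app.use(
  "/assets",
  express.static("public", {
    maxAge: "7d", // Cache-Control max-age of one week
    etag: true,   // enables 304 Not Modified responses
  })
);

app.listen(8080, () => console.log("listening on 8080"));
```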

Efficiently structure markup

  • Load pages in latest browser mode
  • Use http header to specify legacy IE mode
  • Link CSS in the top of the page header, never on the bottom.
  • Avoid using @import of hierarchical styles
  • Avoid embedded and inline CSS
  • Only include necessary styles in the page.
  • Always link JS at the end of the page.
  • Avoid linking JS in the header (use the defer & async attribute)
  • Avoid Inline JS
  • Remove duplicate code (52% of the web has duplicate code)
  • Standardise on a single framework

Optimise your media usage

  • Avoid death by too many images
  • If possible use Image Sprites
  • Use png image file format
  • Use Native image resolutions
  • Replace Images with CSS3 Gradients, border Radius
  • Leverage CSS3 Transforms
  • Use Data URI’s for Small Single view images
  • Avoid complex SVG paths
  • Video : Use preview images
  • Minimize media plugin usage
  • Proactively download Future media

Write Fast JavaScript

  • Stick to integer math (Math.floor)
  • Minify your JavaScript
  • Initialize JavaScript on Demand
  • Minimize your DOM Interactions
  • Built-in DOM methods are always more efficient (firstChild, nextSibling are faster)
  • Use Selectors for Collection access (document.querySelectorAll)
  • Use .innerHTML to construct your page
  • Batch your markup changes (see the sketch after this list)
  • Maintain smaller DOM (less than 1000 elements)
  • JSON always faster than XML (possible myth)
  • Use Native JSON methods
  • Use regular expressions Sparingly
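
A tiny sketch combining a few of these tips (minimise DOM interactions, use .innerHTML, batch markup changes); the element ID and data are invented for illustration:

```typescript
// Instead of appending 1,000 list items one by one (1,000 DOM interactions),
// build the markup in memory and touch the DOM once.
function renderList(items: string[]): void {
  const list = document.getElementById("results")!; // hypothetical container
  const markup = items
    .map((item) => `<li>${item}</li>`) // assumes items are already escaped/trusted
    .join("");
  list.innerHTML = markup; // single DOM write, single layout pass
}

renderList(Array.from({ length: 1000 }, (_, i) => `Row ${i + 1}`));
```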

Know what your app is doing

  • Understand JavaScript Timers (setTimeout, SetInterval)
  • Combine Application Timers
  • Ensure dormant timers are not running
  • Align timers to the display frame (16.7 ms)
  • Use window.requestAnimationFrame(renderLoop) for animations
  • Know when your application is visible (document.hidden and the visibilitychange event) – see the sketch after this list
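
A brief sketch of the last two tips (frame-aligned animation that pauses while the page is hidden); the render logic is a placeholder:

```typescript
let rafId: number | null = null;

function renderLoop(): void {
  // ... draw one animation frame here (placeholder) ...
  rafId = window.requestAnimationFrame(renderLoop); // stays aligned to the display frame
}

function startAnimation(): void {
  if (rafId === null) rafId = window.requestAnimationFrame(renderLoop);
}

function stopAnimation(): void {
  if (rafId !== null) {
    window.cancelAnimationFrame(rafId);
    rafId = null;
  }
}

// Don't burn CPU while the tab is hidden; resume when it becomes visible.
document.addEventListener("visibilitychange", () => {
  document.hidden ? stopAnimation() : startAnimation();
});

startAnimation();
```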

Conclusion

Web optimisation is not easy and needs an exhaustive, deep look; hopefully this checklist helps while you optimise your pages. Enjoy coding high-performing applications. If you have more tips, please share them in the comments.

 

Micro-Services, Eventual Consistency and Event sourcing patterns
 

Microservices is really becoming a popular architectural pattern; much of the new software written these days simply applies it. One of the most important differences between traditional web services and the microservices pattern is the amount of shared material across different subject areas. This calls for a discussion on how the eventual consistency pattern is mandatory for successfully implementing microservices.

Micro frontends are also gaining a lot of popularity. You can read about microservices principles and micro frontends at:

Microservices and Microfrontends

Micro-Service Pattern

Generally, in a microservice pattern, the APIs are split into small subject areas. For example, for a CRM application the subject areas are:

  • Customer information – like name, address, email, phone
  • Appointment information – which customer, salesperson, when, where
  • Relationship management – sales/manager, what products, interests
  • Campaign data – offers, deals etc

Then microservices are built for each of the subject areas. The microservices are logically and physically separated from each other, i.e. there is no sharing (code, database, components, etc.) between any of the microservices of these subject areas. Pictorially, it looks something like this.

Applying Eventual Consistency Pattern

In microservices, there is no data shared across the services. In order to synchronize data across the isolated storage of these services, we need to apply the eventual consistency pattern. You can read more about applying the pattern correctly here. The simplest way we can achieve consistency across these microservices is through the event sourcing pattern.

Event Sourcing

Event sourcing is the process of capturing application state changes in the form of events. Examples of events are 'customer created', 'customer updated', 'deal created', etc. Other systems listen to these events and take the relevant action. You can read more about event sourcing here.
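
As a rough sketch of how this keeps microservices eventually consistent (the event names, in-memory log/bus and read model below are my own simplified illustration):

```typescript
// Events captured by the customer microservice.
type CustomerEvent =
  | { type: "CustomerCreated"; customerId: string; name: string }
  | { type: "CustomerUpdated"; customerId: string; name: string };

// A trivial in-memory event store + bus; in practice this would be a durable log or broker.
const eventLog: CustomerEvent[] = [];
const subscribers: Array<(event: CustomerEvent) => void> = [];

function publish(event: CustomerEvent): void {
  eventLog.push(event);                              // state changes are stored as events
  subscribers.forEach((handler) => handler(event));  // other services react to them
}

// The appointment microservice keeps its own copy of customer names,
// updated only through events – eventually consistent, no shared database.
const appointmentCustomerNames = new Map<string, string>();

subscribers.push((event) => {
  if (event.type === "CustomerCreated" || event.type === "CustomerUpdated") {
    appointmentCustomerNames.set(event.customerId, event.name);
  }
});

publish({ type: "CustomerCreated", customerId: "c-1", name: "Asha" });
publish({ type: "CustomerUpdated", customerId: "c-1", name: "Asha Rao" });
console.log(appointmentCustomerNames.get("c-1")); // "Asha Rao"
```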

Conclusion

Event sourcing is a new way of storing changes to systems and helps in making microservices eventually consistent. Together, these patterns form well-maintainable, reliable and scalable systems in the modern world.

 

 

MEAN stack Part 2 – Yeoman and generator-angular-fullstack
 

This is the second post in the Getting Started with MEAN series, where we start looking at the Node ecosystem for a stable web application framework for our application. A brief glance through the Express framework showed that Express by itself would be time-consuming to start off with from scratch. On the other hand, the Express.js website links to a number of frameworks built on top of Express. This looked like a good place to start exploring.

Going through the list of frameworks mentioned there, however, we quickly noticed that a number of options were either targeted only (or primarily) towards APIs, lacked sufficient documentation for our needs, or did not appear to have a sufficiently large community behind them. Mind you, the assessment wasn’t entirely objective, as I did have previous experience with generator angular fullstack.

Why generator-angular-fullstack

Generator angular fullstack is a Yeoman generator that provides an application skeleton as well as sub generators that allow the developer to quickly scaffold various UI and API components. The generator supports a number of customizations through questions asked before generating the app. The ones I found most useful were:

Client Side

  • Html templating : HTML, PUG (previously Jade)
  • Angular Router : ngRouter, ui-router
  • Stylesheets : css, stylus, sass, less
  • CSS frameworks : bootstrap (with the option to include UI Bootstrap)

Server Side:

  • Scripts : javascript, Typescript
  • Databases : none, Mongodb, SQL
  • Authentication boilerplate
  • oAuth integration with Facebook, Twitter and Google Plus
  • Socket IO integration

Phew! That was a lot of options built into the generator itself, stuff that teams and products usually like to customize according to their convenience. This by itself is a strong reason to select the framework, because none of the other frameworks that we evaluated came anywhere close in terms of customizability and out of the box functionality. The generator also comes with built in hooks to deploy your application to Heroku or Openshift, although I found that part to be a little broken (more on that later)

A lot of things have changed, though, since my last experience. For one, the folks over at generator-angular-fullstack added support for TypeScript. TypeScript, in fact, is a superset of JavaScript, but it brings a number of enhancements (including transpile-time type safety) to the table. On the other hand, the generator still works with Angular 1.x, although the alpha version does support Angular 2. But then, working with the alpha version for a project needing quick turnaround times didn't sound like an exciting thought.

Anyway, coming back to getting started with the application.

Getting Started

Installing Yeoman

 npm install -g yo

Installing the angular fullstack generator and its prerequisites

npm install -g yo gulp-cli generator-angular-fullstack

On a side note, please read node-gyp’s installation guide before going any further to get the application to run successfully. Node-gyp is required to compile native drivers for each platform and on windows, there are a couple of different ways to support the compilation process. Personally, I was able to get option 2 on the link mentioned above working (see: “Option 2: Install tools and configuration manually”)

Initializing the App

Finally! Coming down to the crux of the matter. Initializing the app

Run : yo angular-fullstack

The installer asks a number of questions about customization, including the ones mentioned above, and once the installer has completed, you have a fully functional app (with authentication if you selected it during install) ready for you to work on.

Running the app is as simple as running : gulp serve

You just need to make sure that if you selected a database, it is running on your local box so the node API can connect to it. If the DB is not found, the application will simply crash, but you already knew that would happen 🙂

Stay tuned for the next article about getting familiar with the code generated by the Yeoman generator.