Friday, January 18, 2019

FaaS tutorial 2: Set up Google Cloud Function

Now that we have deployed an app in FaaS tutorial 1: Start with Firebase and prepare the ground, it's time to spice up 🌶️ our basic app with some back-end stuff.

What about defining a REST API to add a new record to the database? We'll use HTTP-triggered functions. There are different kinds of triggers for different use cases; we'll dig into that in the next post.

Let's start our tutorial, as always step by step 👣!

Step 1: init function

For your project to use Google Cloud Functions (GCF), use the Firebase CLI to configure it. Simply run the command:
$ firebase init functions

     ######## #### ########  ######## ########     ###     ######  ########
     ##        ##  ##     ## ##       ##     ##  ##   ##  ##       ##
     ######    ##  ########  ######   ########  #########  ######  ######
     ##        ##  ##    ##  ##       ##     ## ##     ##       ## ##
     ##       #### ##     ## ######## ########  ##     ##  ######  ########

You're about to initialize a Firebase project in this directory:

  /Users/corinne/workspace/test-crud2


=== Project Setup

First, let's associate this project directory with a Firebase project.
You can create multiple project aliases by running firebase use --add, 
but for now we'll just set up a default project.

? Select a default Firebase project for this directory: test-83c1a (test)
i  Using project test-83c1a (test)

=== Functions Setup

A functions directory will be created in your project with a Node.js
package pre-configured. Functions can be deployed with firebase deploy.

? What language would you like to use to write Cloud Functions? JavaScript
? Do you want to use ESLint to catch probable bugs and enforce style? No
✔  Wrote functions/package.json
✔  Wrote functions/index.js
✔  Wrote functions/.gitignore
? Do you want to install dependencies with npm now? Yes
Below the ASCII art 🎨, Firebase gets chatty and tells you about everything it's doing.
Once you've selected a Firebase project (select the one we created in tutorial 1 with the Firestore setup), use the default options (JavaScript, no ESLint).

Note: by default, GCF runs on Node 6. To enable Node 8, add the following JSON at the root level of /functions/package.json:
"engines": {
    "node": "8"
  }
You will need Node 8 for the rest of the tutorial, as we use async/await instead of Promise syntax.
Firebase has created a default package with an initial GCF bootstrap in functions/index.js.
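As a quick refresher, async/await is syntactic sugar over Promises. Here is a standalone sketch (with a hypothetical fakeDb stand-in, not the real Firestore SDK) showing two equivalent ways to write the same operation:

```javascript
// Promise-chaining style
function addItemThen(db, item) {
  return db.add(item).then((ref) => ref.id);
}

// async/await style (requires Node 8) — same behaviour
async function addItemAsync(db, item) {
  const ref = await db.add(item);
  return ref.id;
}

// hypothetical in-memory stand-in for a Firestore collection
const fakeDb = { add: () => Promise.resolve({ id: 'abc123' }) };

addItemThen(fakeDb, {}).then((id) => console.log(id));  // abc123
addItemAsync(fakeDb, {}).then((id) => console.log(id)); // abc123
```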

Step 2: HelloWorld

Go to functions/index.js and uncomment the helloWorld function:
exports.helloWorld = functions.https.onRequest((request, response) => {
  response.send("Hello from Firebase!");
});
This is a basic helloWorld function; we'll use it just to get used to deploying functions.

Step 3: deploy

Again, use the Firebase CLI and type the command:
$ firebase deploy --only functions
✔  functions[helloWorld(us-central1)]: Successful update operation. 
✔  Deploy complete!

Please note that it can take up to 30 seconds for your updated functions to propagate.
Project Console: https://console.firebase.google.com/project/test-83c1a/overview
Note you can also deploy a single function with firebase deploy --only functions:myFunctionName.
If you go to the Firebase console and then to the Functions tab, you will find the URL where your function is available.



Step 4: try it

Since it's an HTTP-triggered function, let's try it with curl:
$ curl https://us-central1-test-83c1a.cloudfunctions.net/helloWorld
Hello from Firebase!
You've deployed and tried your first cloud function. 🎉🎉🎉
Let's now try to fulfil the same use case as in tutorial 1: we want an HTTP-triggered function that inserts 2 fields into a database collection.

Step 5: onRequest function to insert in DB

  • In function/index.js add the function below:
    const admin = require('firebase-admin');
    admin.initializeApp(); // [1]
    
    exports.insertOnRequest = functions.https.onRequest(async (req, res) => {
      const field1 = req.query.field1; // [2] 
      const field2 = req.query.field2;
      const writeResult = await admin.firestore().collection('items').add({field1: field1, field2: field2}); // [3]
      res.json({result: `Message with ID: ${writeResult.id} added.`}); // [4]
    });
    

    • [1]: import the Firebase Admin SDK to access the Firestore database and initialize with default values.
    • [2]: extract data from query param.
    • [3]: add the new message into the Firestore Database using the Firebase Admin SDK.
    • [4]: send back the id of the newly inserted record.
  • Deploy it with firebase deploy --only functions. This will redeploy both functions.
  • Test it by curling:
    $ curl https://us-central1-test-83c1a.cloudfunctions.net/insertOnRequest\?field1\=test1\&field2\=test2
    {"result":"Message with ID: b5Nw8U3wraQhRqJ0vMER added."}
    
Wow! Even better, you've deployed a cloud function that does something 🎉🎉🎉

Note that if your use case is to call a cloud function from your UI, you can use an onCall GCF. Some of the boilerplate around security is taken care of for you. Let's try to add an onCall function!

Step 6: onCall function to insert in DB

  • In function/index.js add the function below:
    exports.insertOnCall = functions.https.onCall(async (data, context) => {
      console.log(`insertOnCall::Add to database ${JSON.stringify(data)}`);
      const {field1, field2} = data;
      await admin.firestore().collection('items').add({field1, field2});
    });
    
  • Deploy it with firebase deploy --only functions. This will redeploy both functions.
  • Test it in your UI code. In tutorial 1, step 5 we defined a Create component in src/component/index.js; let's revisit the onSubmit method:
    onSubmit = (e) => {
        e.preventDefault();
        // insert by calling cloud function
        const insertDB = firebase.functions().httpsCallable('insertOnCall'); // [1]
        insertDB(this.state).then((result) => { // [2]
          console.log(`::Result is ${JSON.stringify(result)}`);
          this.setState({
            field1: '',
            field2: '',
          });
          this.props.history.push("/")
        }).catch((error) => {
          console.error("Error adding document: ", error);
        });
      };
    

    In [1] we pass the name of the function to retrieve a reference; we simply call this function in [2] with a JSON object containing all the fields we need.

Where to go from here?

In this tutorial, you've gotten acquainted with Google Cloud Functions in their simplest form: HTTP-triggered. To go further into learning how to code GCF, the best way is to look at existing code: the firebase/functions-samples repo on GitHub is the perfect place to explore.
In the next tutorials we'll explore the different use cases that best fit a cloud function.

Stay tuned!

Thursday, January 17, 2019

FaaS tutorial 1: Start with Firebase and prepare the ground

As an organiser of RivieraDEV, I was looking for a platform to host our CFP (call for papers). I bumped into the open source project conference-hall while wandering on Twitter (the gossip 🐦 bird is useful from time to time).

The app is nicely crafted and can be used for free; even better, I learned afterwards that there is a hosted version! That's the one I wanted to use, but we were missing one key feature: sending emails to inform speakers of the deliberations and providing a way for speakers to confirm their attendance.

💡💡💡 Open source project? Let's make the world 🌎 better by contributing...

At first look, conference-hall is a web app deployed on Google Cloud Platform. The SPA is deployed using Firebase tooling and makes use of the Firestore database. By contributing to the project, I got acquainted with Firebase. Learning something new is cool, sharing it is even better 🤗 🤩

Time to start a series of blog posts on the FaaS subject. I'd like to explore Google's functions as a service, but also go broader and see how FaaS is implemented in the open source world.

In this first article, I'll share with you how to get started configuring a project from scratch in Firebase and how to deploy it, to get a base project for introducing cloud functions in the next post. Let's get started step by step...

Step 1️⃣: Initialise firebase project

Go to the Firebase console and create a Firebase project; let's name it test.

Step 2️⃣: Use Firestore

  • In the left-hand side menu select the Database tab, then click Create Database. Follow the Firestore documentation if in trouble; the Firebase console UI is quite easy to follow. Note Firestore is still in beta at the time of writing.
  • Choose Start in test mode, then click the Enable button.
You should be forwarded to the Database explorer, where you can now add a new collection items as below:

Step 3️⃣: Bootstrap app

We use create-react-app to get an initial React app:
npx create-react-app test-crud
cd test-crud
npm install --save firebase
The last command adds the Firebase SDK to the project.

Insert firebase config

  • We use react-scripts' env variable support
  • In .env.local, copy the variables from the Firebase console
  • In src/firebase/firebase.js, read the env variables and initialise:
    const config = {
      apiKey: process.env.REACT_APP_API_KEY,
      authDomain: process.env.REACT_APP_AUTH_DOMAIN,
      databaseURL: process.env.REACT_APP_DATABASE_URL,
      projectId: process.env.REACT_APP_PROJECT_ID,
      storageBucket: process.env.REACT_APP_STORAGE_BUCKET,
      messagingSenderId: process.env.REACT_APP_MESSAGING_SENDER_ID,
    };
    firebase.initializeApp(config);
    
    const settings = { timestampsInSnapshots: true }; // recommended setting during the Firestore beta
    firebase.firestore().settings(settings);
    
This way you keep your secrets safe, not committed in your code 🤫🤫🤫
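As an illustration, the .env.local file could look like the following; the values below are placeholders, copy the real ones from your Firebase console project settings:

```shell
# .env.local — kept out of git (all values here are fake placeholders)
REACT_APP_API_KEY=AIzaSyXXXXXXXXXXXXXXXXXXXX
REACT_APP_AUTH_DOMAIN=test-83c1a.firebaseapp.com
REACT_APP_DATABASE_URL=https://test-83c1a.firebaseio.com
REACT_APP_PROJECT_ID=test-83c1a
REACT_APP_STORAGE_BUCKET=test-83c1a.appspot.com
REACT_APP_MESSAGING_SENDER_ID=000000000000
```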

Step 4️⃣: Add routing

npm install --save react-router-dom
mkdir src/components
touch src/components/create.js
And define the routes in src/index.js:
ReactDOM.render(
  <Router>
    <div>
      <Route exact path='/' component={App} />
      <Route path='/create' component={Create} />
    </div>
  </Router>,
  document.getElementById('root')
);
On the root path, we'll display the list of items. In the Create component we'll define a form to add new items to the list.

Step 5️⃣: Access Firestore in the app

Let's define the content of the Create component in src/component/index.js:
class Create extends Component {
  constructor() {
    super();
    this.ref = firebase.firestore().collection('items'); // [1] retrieve items reference
    this.state = {
      field1: '',
      field2: '',
    };
  }
  onChange = (e) => {
    const state = this.state;
    state[e.target.name] = e.target.value;
    this.setState(state);
  };
  onSubmit = (e) => {
    e.preventDefault();
    const { field1, field2 } = this.state;
    this.ref.add({                                     // [2] Insert by using firestore SDK
      field1,
      field2,
    }).then((docRef) => {
      this.setState({
        field1: '',
        field2: '',
      });
      this.props.history.push("/")
    }).catch((error) => {
      console.error("Error adding document: ", error);
    });
  };

  render() {
    const { field1, field2 } = this.state;
    return (
      <div>
        <div>
          <div>
            <h3>
              Add Item
            </h3>
          </div>
          <div>
            <h4><Link to="/">Items List</Link></h4>
            <form onSubmit={this.onSubmit}>
              <div>
                <label htmlFor="title">field1:</label>
                <input type="text" name="field1" value={field1} onChange={this.onChange}  />
              </div>
              <div>
                <label htmlFor="title">field2:</label>
                <input type="text" name="field2" value={field2} onChange={this.onChange} />
              </div>
              <button type="submit">Submit</button>
            </form>
          </div>
        </div>
      </div>
    );
  }
}

export default Create;
It seems like a lot of code, but the key points are [1] and [2], where we use the Firestore SDK to add a new item to the database directly from the client app. The call in [2] is going to be revisited in the next blog post to make use of a cloud function.

Step 6️⃣: Deploy on firebase

So we've built a small test app accessing the Firestore DB; let's deploy it to the cloud with Firebase tooling 👍!
  • Start running a production build
    $ npm run build
    
  • Install firebase tools
    $ npm install -g firebase-tools
    $ firebase login
    
  • Init function
    $ firebase init
    
    • Step 1: Select the Firebase features you want to use: Firestore and Hosting. For now we focus only on deploying, i.e. hosting the app.
    • Step 2: The Firebase command-line interface will pull up your list of Firebase projects; pick firebase-crud.
    • Step 3: Keep the default for the Database Rules file name and just press enter.
    • Step 4: Pay attention to the question about the public directory, which is the directory that will be deployed and served by Firebase. In our case it is build, the folder where our production build is located. Type "build" and proceed.
    • Step 5: Firebase will ask you if you want the app to be configured as a single-page app. Say "yes".
    • Step 6: Firebase will warn us that we already have build/index.html. All fine!
  • deploy!
    $ firebase deploy
    ...
    ✔  Deploy complete!
    
        Project Console: https://console.firebase.google.com/project/test-83c1a/overview
        Hosting URL: https://test-83c1a.firebaseapp.com
    


Where to go from there?

In this blog post you've seen how to configure and deploy an SPA on Firebase and how to set up a Firestore DB. In the next blog post, you'll see how to write your first Google Cloud Function. Stay tuned.

Thursday, September 13, 2018

Unpublish a npm package

Last week, I was playing with semantic-release: giving your CI control over your semantic releases. Sweet. I should dedicate a post to it (to come later).
Nevertheless, I got into a situation where an erroneous version number got released (wrong commit message). Without a major version bump, a breaking change in the lib won't be reflected (defeating the whole purpose of semantic release). 😱😱😱😱

Unpublish a "recent" version


If you try to unpublish a version just released:
$ npm publish .
+ launcher-demo@5.0.0
$ npm unpublish launcher-demo@5.0.0                   
- launcher-demo@5.0.0

It's ok! Phew, you can do it. 😅😅😅😅
Now, is it possible to publish the same version later?
$ npm publish .                    
npm ERR! publish Failed PUT 400
npm ERR! code E400
npm ERR! Cannot publish over previously published version "5.0.0". : launcher-demo

It makes sense that you can't reuse the same version, so if you update package.json to 5.0.1:
$ npm publish .
+ launcher-demo@5.0.1

Just fine!

Unpublish a "old" version


Let's say I want to unpublish a version released last week:
$ npm unpublish launcher-demo@3.2.8
npm ERR! unpublish Failed to update data
npm ERR! code E400
npm ERR! You can no longer unpublish this version. Please deprecate it instead

Thanks npm for your kind suggestion; let's try to deprecate it with a short message:
$ npm deprecate launcher-demo@3.2.8 'erronous version'

At least now the package is visibly deprecated; trying to pull it will display a deprecation warning.
$ npm i launcher-demo@3.2.8
npm WARN deprecated launcher-demo@3.2.8: erronous version


Unpublish policy


"Old", "recent" version. What does it all mean? Let's check the npm unpublish policy

Quote: If the package is still within the first 72 hours, you should use one of the following from your command line:
  • npm unpublish <package-name> -f to remove the entire package thanks to the -f or force flag
  • npm unpublish <package-name>@<version> to remove a specific version

Some considerations:
Once package@version has been used, you can never use it again. You must publish a new version even if you unpublished the old one.
If you entirely unpublish a package, nobody else (even you) will be able to publish a package of that name for 24 hours.
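The "once used, never again" rule can be sketched in a few lines of JavaScript (a conceptual model, not npm's actual implementation):

```javascript
// Conceptual model of npm's version rule: a version number, once published,
// is burned forever — even if it was later unpublished.
function canPublish(everUsedVersions, candidate) {
  return !everUsedVersions.includes(candidate);
}

const everUsed = ['5.0.0']; // 5.0.0 was published, then unpublished
console.log(canPublish(everUsed, '5.0.0')); // false: "Cannot publish over previously published version"
console.log(canPublish(everUsed, '5.0.1')); // true: bump the version and you're fine
```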

After the buzzy one-developer-just-broke-Node affair in March 2016, the unpublish policies were changed. A 10-line library used everywhere should not be able to take the whole JS community down. A step toward more immutability won't harm.

Where to go from there


Made an error releasing your package?
You've got 72 hours to fix it. 👍👍👍👍
Otherwise, deprecate it.
Maybe it's time to automate releasing with your CI. 😇😇😇😇

Sunday, June 25, 2017

Dirty secrets on dependency injection and Angular - part 2

In the previous post "Dirty secrets on dependency injection and Angular - part 1", you explored how DI at component level can produce different instances of a service. Then you experienced DI at module level: once a service is declared using one token in the AppModule, the same instance is shared across all the modules and components of the app.

In this article, let's revisit DI in the context of lazy-loaded modules. You'll see that dynamically loaded feature modules behave differently.

Let's get started...

Tour of hero app


Let's reuse the Tour of Heroes app that you should be familiar with from our previous post. All source code can be found on GitHub.

As a reminder, in our Tour of Heroes, the app displays a Dashboard page and a Heroes page. We've added a RecentHeroComponent that displays the recently selected heroes on both pages. This component uses the ContextService to store the recently added heroes.

In the previous blog post, you worked your way through refactoring the app and introduced a SharedModule that contains RecentHeroComponent and uses the ContextService. Let's refactor the app further and break it into more feature modules:
  • DashboardModule to contain the HeroSearchComponent and HeroDetailComponent
  • HeroesModule to contain the HeroesComponent


Features module


Here is a schema of what you have in the lazy.loading.routing.shared github branch:


DashboardModule is as below:
@NgModule({
  imports: [
    CommonModule,
    FormsModule,
    DashboardRoutingModule, // [1]
    HeroDetailModule,
    SharedModule            // [2]
  ],
  declarations: [
    DashboardComponent,
    HeroSearchComponent
  ],
  exports: [],
  providers: [
    HeroService,
    HeroSearchService
  ]
})
export class DashboardModule { }

In [1] you define DashboardRoutingModule.

In [2] you import SharedModule, which defines common components like SpinnerComponent and RecentHeroComponent.

HeroesModule is as below:
@NgModule({
  imports: [
    CommonModule,
    FormsModule,
    HeroDetailModule,
    SharedModule,  // [1]
    HeroesRoutingModule
  ],
  declarations: [ HeroesComponent ],
  exports: [
    HeroesComponent,
    HeroDetailComponent
  ],
  providers: [ HeroService ] // [2]
})
export class HeroesModule { }

In [1] you import SharedModule, which defines common components like SpinnerComponent and RecentHeroComponent.
Note in [2] that HeroService is defined as a provider in both modules. It could be a candidate to be provided by SharedModule. This service is stateless, however, so having multiple instances won't bother us as much as with a stateful service.

Last, let's look at AppModule:
@NgModule({
  declarations: [ AppComponent ], // [1]
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    SharedModule,     // [2]
    InMemoryWebApiModule.forRoot(InMemoryDataService),
    AppRoutingModule  // [3]
  ],
  providers: [],      // [4]
  bootstrap: [ AppComponent ],
  schemas: [NO_ERRORS_SCHEMA, CUSTOM_ELEMENTS_SCHEMA]
})
export class AppModule {}

In [1], the declarations section is really lean, as most components are declared either in the feature modules or in the shared module.

In [2], you now import the SharedModule from AppModule. SharedModule is also imported in the feature modules. From our previous post we know that in statically loaded modules the last declared token for a shared service wins; eventually there is only one instance defined. Is it the same for lazy loading?

In [3] we define the module for lazy loading; more in the next section.

In [4], the providers section is lean, similar to declarations, as most providers are defined at feature-module level.

Lazy loading modules


AppRoutingModule is as below:
const routes: Routes = [
  { path: '', redirectTo: '/dashboard', pathMatch: 'full' },
  { path: 'dashboard',  loadChildren: './dashboard/dashboard.module#DashboardModule' }, // [1]
  { path: 'detail/:id', loadChildren: './dashboard/dashboard.module#DashboardModule' },
  { path: 'heroes',     loadChildren: './heroes/heroes.module#HeroesModule' }
]

@NgModule({
  imports: [ RouterModule.forRoot(routes) ],
  exports: [ RouterModule ]
})
export class AppRoutingModule {}

In [1], you lazy load DashboardModule with the loadChildren routing mechanism.

Running the app, you can observe the same symptom as when we defined ContextService at component level: DashboardModule has a different instance of ContextService than HeroesModule. This is easily observable with 2 different lists of recently added heroes.

Checking angular.io module FAQ, you can get an explanation for that behaviour:

Angular adds @NgModule.providers to the application root injector, unless the module is lazy loaded. For a lazy-loaded module, Angular creates a child injector and adds the module's providers to the child injector.

Why doesn't Angular add lazy-loaded providers to the app root injector as it does for eagerly loaded modules?
The answer is grounded in a fundamental characteristic of the Angular dependency-injection system. An injector can add providers until it's first used. Once an injector starts creating and delivering services, its provider list is frozen; no new providers are allowed.
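This "frozen after first use" behaviour can be modeled in a few lines of plain JavaScript (a simplified sketch, not Angular's actual implementation):

```javascript
// Simplified model: an injector accepts providers until it delivers
// its first service; after that, its provider list is frozen.
class RootInjector {
  constructor() {
    this.providers = new Map();
    this.started = false;
  }
  addProvider(token, factory) {
    if (this.started) throw new Error('provider list is frozen');
    this.providers.set(token, factory);
  }
  get(token) {
    this.started = true; // first use freezes the list
    return this.providers.get(token)();
  }
}

const root = new RootInjector();
root.addProvider('ContextService', () => ({ heroes: [] }));
root.get('ContextService'); // the injector starts delivering services

try {
  // this is what a lazy-loaded module's providers would implicitly try to do
  root.addProvider('LazyService', () => ({}));
} catch (e) {
  console.log(e.message); // 'provider list is frozen'
}
```

This is why Angular gives a lazy-loaded module a child injector with its own providers, and why you end up with two ContextService instances.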


What if you want a singleton ContextService shared across your whole app? There is a way...

Recycle provider with forRoot


Similar to what RouterModule uses: forRoot. Here is a schema of what you have in the lazy.loading.routing.forRoot github branch:



In SharedModule:
@NgModule({
  imports: [
    CommonModule
  ],
  declarations: [
    SpinnerComponent,
    RecentHeroComponent
  ],
  exports: [
    SpinnerComponent,
    RecentHeroComponent
  ],
  //providers: [ContextService], // [1]
  schemas: [NO_ERRORS_SCHEMA, CUSTOM_ELEMENTS_SCHEMA]
})
export class SharedModule {

  static forRoot() {            // [2]
    return {
      ngModule: SharedModule,
      providers: [ ContextService ]
    }
  }
 }

In [1] remove ContextService from the providers. In [2] define a forRoot method (the naming is a broadly accepted convention) that returns a ModuleWithProviders object. This interface defines a module with a given list of providers. SharedModule will then reuse the ContextService provider defined at AppModule level.

In all feature modules, import SharedModule (without calling forRoot).

In AppModule:
@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    //SharedModule, // [1]
    SharedModule.forRoot(), // [2]
    InMemoryWebApiModule.forRoot(InMemoryDataService),
    AppRoutingModule
  ],
  providers: [],
  bootstrap: [
    AppComponent
  ],
  schemas: [NO_ERRORS_SCHEMA, CUSTOM_ELEMENTS_SCHEMA]
})
export class AppModule {
}

In [1] and [2], replace the SharedModule import with SharedModule.forRoot(). You should only call forRoot at the highest level, i.e. AppModule; otherwise you will run into multiple instances.

To see the source code, take a look at the lazy.loading.routing.forRoot github branch.

Where to go from there


In this blog post you've seen how providers in lazy-loaded modules behave differently than in an app with eagerly loaded modules.

Dynamic routing brings its share of complexity and can introduce difficult-to-track bugs in your app, especially if you refactor from statically loaded modules to lazy-loaded ones. Watch out for your shared modules, especially if they provide services.

The Angular team even recommends avoiding providing services in shared modules. If you go that route anyway, you still have the forRoot alternative.

Happy coding!

Friday, June 16, 2017

Dirty secrets on dependency injection and Angular - part 1

Let's talk about Dependency Injection (DI) in Angular. I'd like to take a different approach and tell you about the things that surprised me when I first learned them, using Angular on larger apps...

A key feature of Angular ever since AngularJS (i.e. Angular 1.x), DI is a pure treasure, but the injector hierarchy can be difficult to grasp at first. Add routing and dynamic loading of modules, and everything can go wild... Services get created multiple times, and if they are stateful (yes, functional lovers, you sometimes need state) the global state (even worse 😅) gets out of sync in some parts of your app.
To get back in control of the singleton instances created for your app, you need to be aware of a few things.

Let's get started...

Tour of hero app


Let's reuse the Tour of Heroes app that you should be familiar with from when you first started at angular.io. Thanks to LarsKumbier for adapting it to webpack; I've forked the repo and adjusted it to my demo's needs. All source code can be found on GitHub.

In this version of Tour of Heroes, the app displays a Dashboard page and a Heroes page. I've added a RecentHeroComponent that displays the recently selected heroes on both pages. This component uses the ContextService to store the recently added heroes.


See AppModule in master branch.

Provider at Component level


Let's go to HeroSearchComponent in src/app/hero-search/hero-search.component.ts file and change the @Component decorator:
@Component({
  selector: 'hero-search',
  templateUrl: './hero-search.component.html',
  styleUrls: ['./hero-search.component.css'],
  providers: [ContextService] // [1]
})
export class HeroSearchComponent implements OnInit {

If you add line [1], you get something like this drawing:



Run the app again.
What do you observe?
The Heroes page works fine, listing the recently visited heroes below. However, going to the Dashboard/HeroSearchComponent, the recently visited heroes list is empty!!

The recently added heroes list is empty in HeroSearchComponent because you've got a different instance of ContextService. Dependency injection in Angular relies on hierarchical injectors that are linked to the tree of components. This means that you can configure providers at different levels:
  • for the whole application, when bootstrapping it in the AppModule. All services defined in providers will share the same instance.
  • for a specific component and its sub-components. Same as before, but for a specific component: if you redefine providers at component level, you get a different instance. You've overridden the global AppModule providers.

Tip: don't define app-scoped services at component level. There are very rare use cases where you actually want that.
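Conceptually, the hierarchical lookup works like a chain of injectors. Here is a plain JavaScript sketch (a toy model, not Angular's real injector):

```javascript
// Each injector resolves a token from its own providers first,
// then delegates to its parent; an instance is cached per injector.
class Injector {
  constructor(providers, parent = null) {
    this.providers = providers; // Map: token -> factory
    this.instances = new Map();
    this.parent = parent;
  }
  get(token) {
    if (this.providers.has(token)) {
      if (!this.instances.has(token)) {
        this.instances.set(token, this.providers.get(token)());
      }
      return this.instances.get(token);
    }
    if (this.parent) return this.parent.get(token);
    throw new Error('No provider for ' + token);
  }
}

const appModule = new Injector(new Map([['ContextService', () => ({ recent: [] })]]));
const heroesCmp = new Injector(new Map(), appModule); // no local provider
const searchCmp = new Injector(new Map([['ContextService', () => ({ recent: [] })]]), appModule); // redeclares the token

console.log(heroesCmp.get('ContextService') === appModule.get('ContextService')); // true: shared singleton
console.log(searchCmp.get('ContextService') === appModule.get('ContextService')); // false: its own instance
```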


Provider at Module level


What about providers at module level? Say we do something like:



Let's first refactor the code to introduce a SharedModule as defined in the angular.io guide. In the SharedModule, we put the SpinnerComponent, the RecentHeroComponent and the ContextService. Having created the SharedModule, you can clean up the imports of AppModule, which now looks like:

@NgModule({
  declarations: [
    AppComponent,
    HeroDetailComponent,
    HeroesComponent,
    DashboardComponent,
    HeroSearchComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    SharedModule,
    InMemoryWebApiModule.forRoot(InMemoryDataService),
    AppRoutingModule
  ],
  providers: [
    HeroSearchService,
    HeroService,
    ContextService
  ],
  bootstrap: [
    AppComponent
  ]
})
export class AppModule {}

Full source code on GitHub here. Notice RecentHeroComponent and SpinnerComponent have been removed from declarations. Intentionally, the ContextService appears twice, at SharedModule and AppModule level. Are we going to have duplicate instances?

Nope.
A module does not have its own injector (as opposed to components, which get their own injector). Therefore, when AppModule provides a service for token ContextService and imports a SharedModule that also provides a service for token ContextService, AppModule's service definition "wins". This is clearly stated in the angular.io NgModule FAQ.
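One way to picture why the last declaration wins: eagerly loaded modules all contribute their providers to a single root list, so re-registering a token simply overwrites the previous entry. A conceptual sketch (not Angular's actual code):

```javascript
// All @NgModule.providers of eagerly loaded modules end up in one
// root list; registering the same token twice overwrites the first entry.
const rootProviders = new Map();
function register(token, factory) {
  rootProviders.set(token, factory); // last registration wins
}

register('ContextService', () => 'instance from SharedModule');
register('ContextService', () => 'instance from AppModule'); // AppModule is processed last

console.log(rootProviders.get('ContextService')()); // 'instance from AppModule'
```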

Where to go from there


In this blog post you've seen how providers on components play an important role in how singletons get created. Modules are a different story: they do not provide encapsulation the way components do.
In the next blog post, you will see how DI and dynamically loaded modules play together. Stay tuned.

Tuesday, May 30, 2017

Going Headless without headache

You're done with a first beta of your Angular 4 app.
Thanks to Test your Angular Services and Test your Angular component, you've got a good test suite 🤗. It runs fine with npm test on your local dev environment. Now it's time to automate it and have it run on a CI server: be it Travis, Jenkins, choose your weapon. But most probably you will need to run your tests headlessly.

Until recently the only way to go was to use PhantomJS, a "headless" browser that can be run via the command line and is primarily used to test websites without the need to completely render a page.

Since Chrome 59 (still in beta), you can now use Chrome headless! In this post we'll see how to go headless: the classical way with PhantomJS, and then we'll take a peek at Chrome headless. You may want to wait for the official release of Chrome 59 (expected to roll out very soon, in May/June this year).

Getting started with angular-cli


Let's use the latest angular-cli release (v1.0.6 at the time of writing); make sure you have installed it in [1].
npm install -g @angular/cli  // [1]
ng new MyHeadlessProject // [2]
cd MyHeadlessProject
npm test // [3]
In [2], create a new project; let's call it MyHeadlessProject.
In [3], run your tests. You can see that by default the tests run in watch mode. If you explore karma.conf.js:
module.exports = function (config) {
  config.set({
    ...
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['Chrome'],  // [1]
    singleRun: false       // [2]
  });
If you switch [2] to true, you can go for a single run.
To go headless, you would have to change Chrome to PhantomJS.

Go headless with PhamtomJS


First, install the PhantomJS browser and its karma launcher with:
npm i phantomjs karma-phantomjs-launcher --save-dev
The next step is to change the karma configuration:
browsers: ['PhantomJS', 'PhantomJS_custom'],
customLaunchers: {
 'PhantomJS_custom': {
    base: 'PhantomJS',
    options: {
      windowName: 'my-window',
      settings: {
        webSecurityEnabled: false
      },
    },
    flags: ['--load-images=true'],
    debug: true
  }
},
phantomjsLauncher: {
  exitOnResourceError: true
},
singleRun: true
and don't forget to import them at the beginning of the file:
plugins: [
  require('karma-jasmine'),
  require('karma-phantomjs-launcher'),
],
Running it, you get this error:
PhantomJS 2.1.1 (Mac OS X 0.0.0) ERROR
  TypeError: undefined is not an object (evaluating '((Object)).assign.apply')
  at webpack:///~/@angular/common/@angular/common.es5.js:3091:0 <- src/test.ts:23952
As per this angular-cli issue, go to polyfills.ts and uncomment:
import 'core-js/es6/object';
import 'core-js/es6/array';
Rerun, tada!
It works!
... until you run into the next polyfill error. PhantomJS is not up to date; even worse, it's getting deprecated. PhantomJS's main maintainer Vitaly is stepping down, recommending a switch to Chrome headless. It's always cumbersome to need a polyfill just for your automated test suite, so let's take a peek at Chrome headless.

Chrome headless


First of all, you need to have either Chrome beta or Chrome Canary installed.
On Mac:
brew cask install google-chrome-canary
The next step is to change the karma configuration:
browsers: ['ChromeNoSandboxHeadless'],
customLaunchers: {
  ChromeNoSandboxHeadless: {
    base: 'ChromeCanary',
    flags: [
      '--no-sandbox',
      // See https://chromium.googlesource.com/chromium/src/+/lkgr/headless/README.md
      '--headless',
      '--disable-gpu',
      // Without a remote debugging port, Google Chrome exits immediately.
      '--remote-debugging-port=9222',
    ],
  },
},
and don't forget to import the launcher at the beginning of the file:
plugins: [
  ...
  require('karma-chrome-launcher'),
],
Rerun, tada! No need to have any polyfill.

What's next?


In this post you saw how to run your test suite headlessly to fit your test automation CI needs. You can get the full source code on github: for PhantomJS in this branch, and for Chrome Headless with Canary in this branch. Have fun and try it on your project!

Friday, May 19, 2017

Test your Angular component

In my previous post "Testing your Services with Angular", we saw how to unit test your Services using DI (Dependency Injection) to inject mock classes into the test module (TestBed). Let's go one step further and see how to unit test your components.

With component testing, you can:
  • either test at the unit level, i.e. testing all public methods: you merely test your JavaScript component, mocking the service and rendering layers.
  • or test at the component level, i.e. testing the component and its template together, interacting with HTML elements.
I tend to use whichever method makes the most sense: if my component has a large template, I do more component testing.

Another complexity introduced by component testing is that most of the time you have to deal with the async nature of HTML rendering. But let's dig in...

Setting up tests


I'll use the code base of openshift.io to illustrate this post. It's a big enough project to go beyond the getting-started apps. The source code can be found at: https://github.com/fabric8io/fabric8-ui/. To run the tests, use npm run test:unit.

Component test: DI, Mock and shallow rendering


In the previous article "Testing your Services with Angular", you saw how to mock service through the use of DI. Same story here, in TestBed.configureTestingModule you define your testing NgModule with the providers. The providers injected at NgModule are available to the components of this module.

For example, let's add a component test for CodebasesAddComponent, a wizard-style component to add a GitHub repository to the list of available codebases. First, you enter the repository name and hit the "sync" button to check (via the GitHub API) if the repo exists. Upon success, some repo details are displayed and a final "associate" button adds the repo to the list of codebases.

To test it, let's create the TestBed module; we need to inject all the dependencies in the providers. Check the constructor of CodebasesAddComponent: there are 7 dependencies injected!

Let's write TestBed.configureTestingModule and inject 7 fake services:
beforeEach(() => {
  broadcasterMock = jasmine.createSpyObj('Broadcaster', ['broadcast']);
  codebasesServiceMock = jasmine.createSpyObj('CodebasesService', ['getCodebases', 'addCodebase']);
  authServiceMock = jasmine.createSpy('AuthenticationService');
  contextsMock = jasmine.createSpy('Contexts');
  gitHubServiceMock = jasmine.createSpyObj('GitHubService', ['getRepoDetailsByFullName', 'getRepoLicenseByUrl']);
  notificationMock = jasmine.createSpyObj('Notifications', ['message']);
  routerMock = jasmine.createSpy('Router');
  routeMock = jasmine.createSpy('ActivatedRoute');
  userServiceMock = jasmine.createSpy('UserService');

  TestBed.configureTestingModule({
    imports: [FormsModule, HttpModule],
    declarations: [CodebasesAddComponent], // [1]
    providers: [
      {
        provide: Broadcaster, useValue: broadcasterMock // [2]
      },
      {
        provide: CodebasesService, useValue: codebasesServiceMock
      },
      {
        provide: Contexts, useClass: ContextsMock // [3]
      },
      {
        provide: GitHubService, useValue: gitHubServiceMock
      },
      {
        provide: Notifications, useValue: notificationMock
      },
      {
        provide: Router, useValue: routerMock
      },
      {
        provide: ActivatedRoute, useValue: routeMock
      }
    ],
    // Tells the compiler not to error on unknown elements and attributes
    schemas: [NO_ERRORS_SCHEMA] // [4]
  });
  fixture = TestBed.createComponent(CodebasesAddComponent);
 });

In line [2], you use useValue to inject a value (created via a dynamic mock with jasmine), or you use a hand-crafted class in [3] to mock your data. Whatever is convenient!

In line [4], you use NO_ERRORS_SCHEMA and in line [1] we declare only one component. This is where shallow rendering comes in. You've stubbed services (quite straightforward thanks to Dependency Injection in Angular). Now is the time to stub child components.

Shallow testing your component means you test your UI component in isolation. Your browser will display only the DOM part that directly belongs to the component under test. For example, if we look at the template, it contains other component elements like alm-slide-out-panel. Since in [1] you declare only your component under test, Angular will give you errors for unknown DOM elements: therefore you tell the framework it can just ignore those with NO_ERRORS_SCHEMA.

Note: To compile or not to compile TestComponent? In most Angular tutorials, you will see TestBed.compileComponents, but as specified in the docs this is not needed when you're using webpack.

Async testing with async and whenStable


Let's write your first test to validate the first part of the wizard creation, click on "sync" button, display second part of the wizard. See full code in here.
fit('Display github repo details after sync button pressed', async(() => { // [1]
  // given
  gitHubServiceMock.getRepoDetailsByFullName.and.returnValue(Observable.of(expectedGitHubRepoDetails));
  gitHubServiceMock.getRepoLicenseByUrl.and.returnValue(Observable.of(expectedGitHubRepoLicense)); // [2]
  const debug = fixture.debugElement;
  const inputSpace = debug.query(By.css('#spacePath'));
  const inputGitHubRepo = debug.query(By.css('#gitHubRepo')); // [3]
  const syncButton = debug.query(By.css('#syncButton'));
  const form = debug.query(By.css('form'));
  fixture.detectChanges(); // [4]

  fixture.whenStable().then(() => { // [5]
    // when github repos added and sync button clicked
    inputGitHubRepo.nativeElement.value = 'TestSpace/toto';
    inputGitHubRepo.nativeElement.dispatchEvent(new Event('input'));
    fixture.detectChanges(); // [6]
  }).then(() => {
    syncButton.nativeElement.click();
    fixture.detectChanges(); // [7]
  }).then(() => {
    expect(form.nativeElement.querySelector('#created').value).toBeTruthy(); // [8]
    expect(form.nativeElement.querySelector('#license').value).toEqual('Apache License 2.0');
  });
}));

In [1] you see the it from jasmine BDD has been prefixed with an f to focus on this test (a good tip to only run the test you're working on).

In [2] you set the expected result for the stubbed service call. Notice the service returns an Observable; we use Observable.of to wrap a result into an Observable stream and start it.

In [3], you get the DOM element — but not quite. Actually, since you use debugElement you get a helper node; you can always call nativeElement on it to get the real DOM object. As a reminder:
abstract class ComponentFixture {
  debugElement;       // test helper 
  componentInstance;  // access properties and methods
  nativeElement;      // access DOM
  detectChanges();    // trigger component change detection
}

In [4] and [5], you trigger an event for the component to be initialized. As HTML rendering is asynchronous by nature, you need to write asynchronous tests. In Jasmine, you used to write async tests using the done() callback, which needs to be called once you're done with the async call. With the Angular framework you can wrap your test inside async.

In [6] you notify the component a change happened: the user entered a repo name, and some validation is going on in the component. Once the validation is successful, you trigger another event and notify the component a change happened in [7]. Because the flow is asynchronous you need to chain your promises.
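The chaining above can be sketched in plain JavaScript. Below is a deliberately simplified, synchronous stand-in for fixture.whenStable() (the real one returns a Promise); it only illustrates why each step is expressed as a then() callback so that it runs after the previous one completes:

```javascript
// Simplified stand-in for fixture.whenStable(): then() calls back
// immediately and returns the chain, so the ordering is easy to see.
function whenStable() {
  const chain = {
    then(fn) { fn(); return chain; }
  };
  return chain;
}

const steps = [];
whenStable()
  .then(() => steps.push('enter repo name'))
  .then(() => steps.push('click sync'))
  .then(() => steps.push('assert form values'));

console.log(steps.join(' -> '));
// prints: enter repo name -> click sync -> assert form values
```

With the real whenStable each then() runs only after pending async work has settled, which is exactly why the test's assertions live in the last link of the chain.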

Finally in [8], following the given-when-then approach of testing, you verify your expectations.

Async testing with fakeAsync and tick


Replace async with fakeAsync, and whenStable / then with tick, and voilà! Here there are no promises in sight, just plain synchronous style.
fit('Display github repo details after sync button pressed', fakeAsync(() => {
  gitHubServiceMock.getRepoDetailsByFullName.and.returnValue(Observable.of(expectedGitHubRepoDetails));
  gitHubServiceMock.getRepoLicenseByUrl.and.returnValue(Observable.of(expectedGitHubRepoLicense));
  const debug = fixture.debugElement;
  const inputGitHubRepo = debug.query(By.css('#gitHubRepo'));
  const syncButton = debug.query(By.css('#syncButton'));
  const form = debug.query(By.css('form'));
  fixture.detectChanges();
  tick();
  inputGitHubRepo.nativeElement.value = 'TestSpace/toto';
  inputGitHubRepo.nativeElement.dispatchEvent(new Event('input'));
  fixture.detectChanges();
  tick();
  syncButton.nativeElement.click();
  fixture.detectChanges();
  tick();
  expect(form.nativeElement.querySelector('#created').value).toBeTruthy();
  expect(form.nativeElement.querySelector('#license').value).toEqual('Apache License 2.0');
}));


When DI get tricky


While writing those tests, I hit the issue of a component defining its own providers. When your component defines its own providers, it gets its own injector, i.e. a new instance of the service is created at your component level. Is this really what is expected? In my case this was an error in the code. To get more details on how dependency injection works in a hierarchy of components, read this great article.

What's next?


In this post you saw how to test an Angular component in isolation, how to test asynchronously, and delved a bit into DI. You can get the full source code on github.
In the next post, I'd like to talk about testing headlessly for your CI/CD integration. Stay tuned. Happy testing!

Sunday, May 14, 2017

RivieraDEV 2017: What a great edition!

RivieraDEV is over. 2 days of conferences, workshop and fun.
This 2017 edition was placed under the sign of...
Cloud.

First, cloudy weather for the speaker diner on Wednesday...
Second, lots of talks about the Cloud with OpenShift, Kubernetes, Docker in prod, Cloud Foundry and even some cluster battles with chaos monkey. 🐵🙈🙉🙊

The first day started with the keynote from the RivieraDEV team, where Sebastien, with a virtual Mark Little, announced the first joint edition with JUDCon. To follow, Nadir Kadem's presentation on hacking culture and Hendrik Ebbers' beautifully crafted slides on a project's life set the tone of RivieraDEV: a conference for developers. With 3 tracks, you have to make a call; here are the presentations I picked.
  • Edson Yanaga's session on Kubernetes and OpenShift. The presentation starts from the Forbes quote "Now every company is a software company" and focuses on how to organize your team using the right tools to deliver the best business value to end users. A/B testing, feature toggles and monitoring are made easy with OpenShift, with zero-downtime deployment.
  • Node.js on OpenShift by Lance Ball, who showcases how to do a source-to-image build in the OpenShift console and get the latest Node.js version available. The presentation also includes a short demo of a circuit breaker for your Node.js microservices.
  • Lunch break with socca: if you don't know what it is, you ought to come to 2018 edition and taste it!
  • Stephanie Moallic talks about how to control a robot from a hand-crafted Ionic 2 mobile app. For under 100 euros you can get your Arduino-based robot! The only limit you have is your imagination.
  • Francois Teychene tells us about his experience running Docker on production clusters. So, even with Docker you can't get rid of your ops department?
  • Jean-François Garreau demos the new Physical Web API. Lots of cool new stuff to try out to enhance your web site's UX with vibration, notifications...
  • The last presentation is a BOF session on monads, where Philippe, Nicolas, Guillaume and Gauthier illustrate functions like map and switchMap with bottles, pears and alcohol. If you don't get fluent in functional programming, I'll put it on the drinks.

For the second day, Matthias kills the rumour about the French Riviera weather. The keynote starts with accessibility and quality by Aurelien Levy, a subject very often overlooked. You, as a developer, have the power to change another person's life. To carry on with "with great power comes great responsibility", Guillaume Champeau talks about ethics in IT and privacy concerns when googling.
Again, with 3 tracks, here are the presentations I've picked.
  • Julien Viet talks about HTTP/2 multiplexing theory, with a very visual demo of HTTP/1 vs HTTP/2 verticles loading images under high latency. Here is the link: http2-showcase! I got out of the presentation wanting to dig a bit more into gRPC with protocol buffers, to encode even better and reduce payload.
  • To follow, Thomas presents Reactive programming with Eclipse Vert.x. With a live demo, he shows a verticle with a reactive wrapper. I learned about RxJava's Single class, a special case of a single-event Observable. I need to dig into Thomas' music store demo.
  • Back for some Docker, with the Skynet vs monkey planet fight. I really enjoyed the light tone of the presentation.
  • Josh Long demos how to deploy on Cloud Foundry using A/B testing, Zuul configuration... Nice demo; I'm even in the demo. Thanks my friend 😊
  • Some CSS novelties with Grid Layout; I'm not yet ready for it, still learning. Thanks Raphael for the nice intro, quite in-depth at times; I'll need to review some slides.

So this is the end, time to say good-bye my friends. Thanks to all our speakers for joining us to make a great edition. Last but not least, thanks to all the attendees for coming and making this edition so special: best attendance record this year, over 400!
See you all for 2018 edition.

PS: If you want to read more blog posts on RivieraDEV, I recommend Fanny's post.

Tuesday, May 9, 2017

Testing your Services with Angular

Have you ever joined a project to find out it is missing unit tests?
So, as the enthusiastic newcomer, you've decided to roll up your sleeves 💪 and add more unit tests. In this article, I'd like to share the fun of seeing the code coverage percentage increase 📊 📈 in your Angular application.

I love the angular.io documentation. Really great content, and since it is part of the angular repo it's well maintained and kept up-to-date. To start with, I recommend reading the Testing Advanced cookbook.

When starting to test a #UnitTestLackingApplication, I think tackling Angular Services is the easiest place to start. Why? Some services might be self-contained objects without dependencies (or, more frequently, the only dependency might be the http module), and for sure there is no DOM testing needed, as opposed to component testing.

Setting up tests


I'll use the code base of openshift.io to illustrate this post. It's a big enough project to go beyond the getting-started apps. The source code can be found at: https://github.com/fabric8io/fabric8-ui/

From my previous post "Debug your Karma", you know how to run unit test and collect code coverage from Istanbul set-up. Simply run:
npm test // to run all test
npm run test:unit // to run only unit test (the one we'll focus on)
npm run test:debug // to debug unit test 

When starting a test you'll need:
  • to have an entry point, similar to having a main.ts, which will call TestBed.initTestEnvironment; this is done once for your whole test suite. See spec-bundle.js for an app generated using the AngularClass starter.
  • you also need a "root" module (called a testing module) similar to the root module of your application. You do this by using TestBed.configureTestingModule. This is something to do for each test suite, depending on what you want to test.
Let's delve into more details and talk about dependency injection:

Dependency Injection


The DI in Angular consists of:
  • Injector - The injector object that exposes APIs to us to create instances of dependencies. In our case we'll use TestBed, which inherits from Injector.
  • Provider - A provider takes a token and maps that to a factory function that creates an object.
  • Dependency - A dependency is the type of which an object should be created.
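To make these three pieces concrete, here is a toy injector in plain JavaScript — a deliberately simplified sketch, nothing like Angular's real implementation; the LOGGER token and the provider shape are illustrative only:

```javascript
// Toy injector: a provider maps a token to a value or a factory,
// and the injector creates and caches one instance per token.
class ToyInjector {
  constructor(providers) {
    this.providers = new Map(providers.map(p => [p.provide, p]));
    this.instances = new Map();
  }
  get(token) {
    if (!this.instances.has(token)) {
      const p = this.providers.get(token);
      if (!p) throw new Error('No provider for ' + String(token));
      // useValue injects an existing object; useFactory creates the dependency
      this.instances.set(token, 'useValue' in p ? p.useValue : p.useFactory());
    }
    return this.instances.get(token); // singletons, like Angular's injector
  }
}

const LOGGER = 'Logger'; // hypothetical token for illustration
const injector = new ToyInjector([
  { provide: LOGGER, useFactory: () => ({ log: msg => msg }) }
]);

console.log(injector.get(LOGGER) === injector.get(LOGGER)); // prints true
```

The last line shows the singleton behaviour discussed below: the injector hands back the same instance on every lookup, which is why tests retrieve their objects from TestBed rather than creating them directly.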
Let's add a unit test for the Codebases service. The service adds/retrieves the list of codebases. First we need to know all the dependencies the service uses, so that we can define a provider for each of them. Looking at the constructor, we get the information:
@Injectable()
export class CodebasesService {
  ...
  constructor(
      private http: Http,
      private logger: Logger,
      private auth: AuthenticationService,
      private userService: UserService,
      @Inject(WIT_API_URL) apiUrl: string) {
      ...
      }

The service depends on 4 services and one configuration string, all injected. As we want to test the service in isolation, we're going to mock most of them.

How to inject mocks with TestBed.configureTestingModule


I chose to use the real Logger (no mock) as the service is really simple; I mock the Http service (more on that later), and I mock AuthenticationService and UserService using Jasmine spies. Finally, I also provide the service under test, CodebasesService.
beforeEach(() => {
      mockAuthService = jasmine.createSpyObj('AuthenticationService', ['getToken']);
      mockUserService = jasmine.createSpy('UserService');

      TestBed.configureTestingModule({
        providers: [
        Logger,
        BaseRequestOptions,
        MockBackend,
          {
            provide: Http,
            useFactory: (backend: MockBackend,
              options: BaseRequestOptions) => new Http(backend, options),
            deps: [MockBackend, BaseRequestOptions]
          },
          {
            provide: AuthenticationService,
            useValue: mockAuthService
          },
          {
            provide: UserService,
            useValue: mockUserService
          },
          {
            provide: WIT_API_URL,
            useValue: "http://example.com"
          },
          CodebasesService
        ]
      });
    });

One important thing to know is that with DI you are not in control of the singleton objects created by the framework. This is the Hollywood principle: don't call us, we'll call you. That's why, to get the singleton instances created for CodebasesService and MockBackend, you need to retrieve them from the injector, either using inject as below:
beforeEach(inject(
  [CodebasesService, MockBackend],
  (service: CodebasesService, mock: MockBackend) => {
    codebasesService = service;
    mockService = mock;
  }));

or using TestBed.get:

 beforeEach(() => {
   codebasesService = TestBed.get(CodebasesService);
   mockService = TestBed.get(MockBackend);
});

To be or not to be


Notice how you get the instance created for you from the injector TestBed. What about the mock instance you provided with useValue? Is it the same object instance that is being used? Interestingly enough, if you write a test like:
it('To be or not to be', () => {
   let mockAuthServiceFromDI = TestBed.get(AuthenticationService);
   expect(mockAuthService).toBe(mockAuthServiceFromDI); // [1]
   expect(mockAuthService).toEqual(mockAuthServiceFromDI); // [2]
 });

line [1] will fail whereas line [2] will succeed. Jasmine uses toBe to compare object instances, whereas toEqual compares objects' values. As noted in the Angular documentation, the instances created by the injector are not the ones you passed to the provider. Always get your instance from the injector, i.e. TestBed.
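The distinction can be seen with two plain objects (a plain-JS illustration of the semantics; Jasmine's toEqual actually performs a deep, recursive comparison):

```javascript
// Two objects with the same shape but distinct identities.
const provided = { status: 200 };
const fromInjector = { status: 200 };

console.log(provided === fromInjector);               // prints false: toBe semantics (identity)
console.log(provided.status === fromInjector.status); // prints true: toEqual semantics (values)
```

So a toBe expectation passes only when both sides reference the very same object, while toEqual passes as long as their contents match.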

Mocking Http module to write your test


Using HttpModule in TestBed


Let's revisit our TestBed's configuration to use HttpModule:
beforeEach(() => {
      mockLog = jasmine.createSpyObj('Logger', ['error']);
      mockAuthService = jasmine.createSpyObj('AuthenticationService', ['getToken']);
      mockUserService = jasmine.createSpy('UserService');

      TestBed.configureTestingModule({
        imports: [HttpModule], // line [1]
        providers: [
          Logger,
          {
            provide: XHRBackend, useClass: MockBackend // line [2]
          },
          {
            provide: AuthenticationService,
            useValue: mockAuthService
          },
          {
            provide: UserService,
            useValue: mockUserService
          },
          {
            provide: WIT_API_URL,
            useValue: "http://example.com"
          },
          CodebasesService
        ]
      });
      codebasesService = TestBed.get(CodebasesService);
      mockService = TestBed.get(XHRBackend);
    });

By adding HttpModule to our testing module in line [1], the providers for Http and RequestOptions are already configured. However, using the NgModule's providers property, you can still override a provider (line [2]) even though it has been introduced by another imported NgModule. With this second approach we simply override XHRBackend.

Mock http response


Using Jasmine BDD style, let's test the addCodebase method:
it('Add codebase', () => {
      // given
      const expectedResponse = {"data": githubData};
      mockService.connections.subscribe((connection: any) => {
        connection.mockRespond(new Response(
          new ResponseOptions({
            body: JSON.stringify(expectedResponse),
            status: 200
          })
        ));
      });
      // when
      codebasesService.addCodebase("mySpace", codebase).subscribe((data: any) => {
        // then
        expect(data.id).toEqual(expectedResponse.data.id);
        expect(data.attributes.type).toEqual(expectedResponse.data.attributes.type);
        expect(data.attributes.url).toEqual(expectedResponse.data.attributes.url);
      });
  });

Let's do our testing using the well-known given, when, then paradigm.

We start with given: Angular's http module comes with a testing class, MockBackend. No http request is sent and you have an API to mock your call. Using connection.mockRespond we can mock the response of any http call. We can also mock failures (a must-have to get 100% code coverage 😉) with connection.mockError.

The when is simply about calling our addCodebase method.

The then is about verifying the expected versus the actual result. Because http calls return RxJS Observables, very often a service method that uses an async REST call will return an Observable too. Here our addCodebase method returns an Observable. To unwrap the Observable, use the subscribe method; inside it you can access the Codebase object and compare its result.
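This subscribe-to-unwrap pattern can be sketched with a minimal Observable-like wrapper. This is an illustrative toy, not the real RxJS API surface, and the addCodebase stand-in below (with its id and attributes fields) is hypothetical:

```javascript
// Toy of(): delivers a single value synchronously to whoever subscribes,
// mimicking how Observable.of wraps one result into a stream and starts it.
function of(value) {
  return { subscribe(next) { next(value); } };
}

// Hypothetical service method returning an Observable-like result.
function addCodebase(spaceId, codebase) {
  return of({ id: 42, attributes: codebase });
}

let received;
addCodebase('mySpace', { type: 'git' }).subscribe(data => { received = data; });

// Because the toy stream emits synchronously, the value is already here.
console.log(received.id, received.attributes.type); // prints: 42 git
```

In the real test the emission also happens synchronously (MockBackend responds immediately), which is why the assertions inside subscribe run before the spec finishes.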


What's next?


In this post you saw how to test an Angular service that uses the http module. You can get the full source code on github.
You've seen how to set up a test with Dependency Injection, how to mock the http layer and how to write your jasmine test. In the next blog post, we'll focus on the UI layer and how to test an Angular component.

Thursday, April 27, 2017

Debug my Karma

I've just started getting my fingers in Angular. I've worked briefly with AngularJS in the past.
Although, I'm doubtless - you know the saying: testing is doubting 😁
I've always started looking at a new framework with unit testing in mind. In this post, I'll follow the Google team's terminology where Angular stands for post-2.0 versions; here I'll use the latest release, i.e. 4.0.

When writing tests, it is sometimes useful to debug them using devtools (come on... no console.log, vintage time is over 👾👾👾). Let's see how to debug your test suite with Karma... It all depends on how you started your project. Let's see how to get a comfortable environment...

With angular2-webpack-starter


Follow the README instructions:

git clone --depth 1 https://github.com/angularclass/angular2-webpack-starter.git
cd angular2-webpack-starter
npm install
npm test

When you run it you can see a blinking browser opening, running the test and closing. To be able to debug, open package.json and add:

{
  "name": "angular2-webpack-starter",
   ...
  "scripts": {
  "test:debug": "karma start --no-single-run --browsers Chrome",
  }
}

Now just run npm run test:debug. Karma is no longer in single-run mode, therefore Chrome stays open!
It's easy to debug: simply press cmd + alt + I to open devtools.
Also, note that the coverage report is run and now visible!

With angular-cli project


Let's open a shell; you'll need node 6.5+ / npm 3+. Install angular-cli globally:

npm install -g @angular/cli

Create a project with angular cli and run the test:

ng new ngx-unit-test
cd ngx-unit-test
ng test

NOTE: npm test is an alias for ng test (as most projects nowadays use npm scripts; very handy, as you don't need to install angular-cli globally!)

ng test launches Chrome by default. It's easy to debug: simply press cmd + alt + I to open devtools. Open your favourite editor and change the code; the tests are re-run. Easy-peasy, not much to do. What if you want to run the coverage tool?
From angular-cli wiki:

ng test --code-coverage --single-run

karma.conf.js the source of truth


In the end, it all boils down to the karma.conf.js configuration file. Whether by default you run continuously in watch mode (dev friendly) or your tests run with PhantomJS (CI/CD friendly) is up to the Karma configuration. It is, however, always good to have a build command to override it and offer both dev- and CI-friendly builds.
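As a reference, a minimal karma.conf.js along those lines might look like this — a sketch assuming the karma-jasmine and karma-chrome-launcher plugins used earlier in this post, not a drop-in file for any particular project:

```javascript
// Minimal karma.conf.js sketch: dev-friendly defaults, with comments
// on which knobs to flip for a CI run.
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher')
    ],
    browsers: ['Chrome'], // swap for a headless custom launcher on CI
    autoWatch: true,      // dev friendly: rerun on every file change
    singleRun: false      // set to true on CI for one pass, then exit
  });
};
```

A common pattern is to keep these dev-friendly defaults and let the CI build override them from the command line, e.g. with --single-run.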

Happy coding! Happy testing!

Tuesday, March 14, 2017

Sharing the fun of DevNexus 2017

1 day of workshops, 2 days of conference sessions and so many tracks to choose from.
This is DevNexus 2017 in Atlanta!
A fun conference to be at 🤗
This year I had the pleasure of being invited as a speaker for my talk: the trials and tribulations of a polyglot cross-platform mobile developer.
I also took the opportunity to attend as many sessions as possible. My theme this year was reactive programming.

Here are some miscellaneous notes, mumblings and souvenirs from this edition.

Wednesday was workshop day. I picked Venkat's workshop on Building Reactive Applications. As I always say: Venkat is always good value, you never get disappointed, there is always something to learn. In this workshop, Venkat tells us what functional programming is all about: function composition and lazy evaluation. I did all the exercises with RxJS; the other guys around me used RxJava. It was funny to see how concise the JavaScript version is 😜

Thursday morning started with Venkat's keynote, Don't Walk Away from Complexity, Run. It was a very inspirational keynote; one of my favourite quotes is: Coding is not a work, coding is an addiction. 👏 👏🏽 👏🏾 👏🏿
I've been addicted for 20 years and I can't get over it 😜
The next sessions I've attended:
  • Introducing TypeScript 2.0 by James Sturtevant: a great new addition in TypeScript 2.0 is non-nullable types; you can activate the check by adding the --strictNullChecks flag to the tsc command line. To read more about non-nullable types, visit this blog post. I like the notion of union types and the easy sugar syntax Type? for optional types (converted to a union type), which reminds me of Swift syntax. When used with the dot operator, the optional type needs to be unwrapped.
  • Promises and Generators in ES6 by Jennifer Bland. I really like Jennifer's biography: a senior developer who was part of Lotus Domino on the AS/400 and reconverted to JavaScript development. Our job as developers is a continuous learning exercise.
  • Gradle Worst Practices: Common anti-patterns in Gradle builds by Gary Hale, where you learn 10 anti-patterns for performance, maintenance, correctness and usability. A useful tip I'll keep in mind: make your build immutable by using the @Input / @OutputDirectory annotations rather than variables.
  • Overview of Webpack, a module bundler by Pavan Podila: one of my favourite presentations for its rich content. I learnt a lot as I followed Pavan through his step-by-step github example. Thanks Pavan for adding the lazy loading step upon my request! I will write a separate blog post on the topic. If you work with Angular 2 or ReactJS, you've used webpack; you might use angular-cli or create-react-app, which hide its configuration away, but getting your way around plain webpack will come in handy.


Friday morning keynote was all about dancing with elephants with Burr:

The keynote was followed by my ✨ presentation ✨ the trials and tribulations of a polyglot cross-platform mobile developer. I had a great time delivering the presentation; this one was slightly different from the one I used to give: less technical but full of anecdotes and feelings. You can see my hand-crafted slides here. Better than a thousand words, it can all be summed up with these drawings:


This is the process a developer as an early adopter goes through ^^^
It's all about emotions 😭 😥 😱 😂 😉 😍
I have no doubt that sharing your feelings with your peer developers at work will make you a better communicator. Having very often been the only female developer in a team, I used to think I'd rather not show I'm a sensitive person: it could be interpreted as a weakness. But looking around me, I see developers (like you), people who can troll flame wars on crucial subjects like tabs vs spaces in your IDE, and they do it with passion and emotion. In fact, I recently ran into this article from Jim Whitehurst, our Red Hat CEO, where he said that "showing emotion at work is simply a reflection of a person's passion". I couldn't agree more with that quote.

In the afternoon I attended Rx.js Cleans Up the Async JavaScript Mess by @codefoster, where we deep-dived into the v4 github examples of RxJS and explored the v5 RxJS repo. Timeflies is my favourite animation; I love the cool effect, and it reminds me of Brian Leathem's DevNation talk. In the end, it was a great complement to Venkat's workshop and a good entry point for examples to look at. I might contribute to porting those samples to version 5.

Time flies when you have fun. DevNexus 2017 was a fantastic edition, with too many tracks and great subjects to choose from. I'll have to come back next year, that's for sure.

Tuesday, April 19, 2016

Watch tutorial 6: Watch Connectivity - Application Context

This post is part of a series of short tutorials on the Watch; catch up with the previous post if you missed it. In this tutorial, you're going to see how you can communicate between your Watch and your iOS app using Application Context.

Get starter project

In case you missed Watch tutorial 5: Watch Connectivity - Direct Message, here are the instructions how to get the starter project. Clone and get the initial project by running:
git clone https://github.com/corinnekrych/DoItCoach.git
cd DoItCoach
git checkout step5
open DoItCoach.xcodeproj

Send Application context from Phone

In DoItCoach/DetailedTaskViewController.swift, add the following method:
func sendTaskToAppleWatch(task: TaskActivity) {
  if WCSession.defaultSession().paired && delegate.session.watchAppInstalled {    // [1]
    try! delegate.session.updateApplicationContext(["task": task.toDictionary()]) // [2]
  }
}
[1]: Before sending to the Watch, as a best practice, check if the Watch is paired and the app is installed on the Watch. No need to do a context update when it's doomed to fail.
[2]: You send the context update. Here you force-try rather than catching, but you could catch the error instead and display it.

You need to import WatchConnectivity to make Xcode happy.
Still in DoItCoach/DetailedTaskViewController.swift, call sendTaskToAppleWatch(_:) in timerStarted(_:) as done in [1] (note the rest of the method is unchanged):
@objc public func timerStarted(note: NSNotification) {
  if let userInfo = note.object, 
     let taskFromNotification = userInfo["task"] as? TaskActivity 
     where taskFromNotification.name == self.task.name {
     if let sender = userInfo["sender"] as? String 
         where sender == "ios" {
         task.start()
         sendTaskToAppleWatch(task) // [1]
     }
     saveTasks()
     self.startButton.setTitle("Stop", forState: .Normal)
     self.startButton.setTitle("Stop", forState: .Selected)
     self.circleView.animateCircle(0, color: taskFromNotification.type.color, 
                                   duration: taskFromNotification.duration)
  }
  print("iOS app::TimerStarted::note::\(note)")
}

Receive Message in Watch app

In DoItCoach WatchKit Extension/ExtensionDelegate.swift, at the end of the class definition, add the following extension declaration:
// MARK: WCSessionDelegate
extension ExtensionDelegate: WCSessionDelegate {
  func session(session: WCSession, didReceiveApplicationContext applicationContext: [String : AnyObject]) {
    if let task = applicationContext["task"] as? [String : AnyObject] { // [1]
      if let name = task["name"] as? String,
         let startDate = task["startDate"] as? Double {
         let tasksFound = TasksManager.instance.tasks?.filter{$0.name == name} // [2]
         let task: TaskActivity?
         if let tasksFound = tasksFound where tasksFound.count > 0 {
           task = tasksFound[0] as TaskActivity
           task?.startDate = NSDate(timeIntervalSinceReferenceDate: startDate)  // [3]
           dispatch_async(dispatch_get_main_queue()) {  // [4]
             NSNotificationCenter.defaultCenter().postNotificationName("CurrentTaskStarted", 
                                                                       object: ["task":task!])
           }
         }
       }
     }
  }
}
[1]: You get the dictionary definition of the task that was started on the iPhone.
[2]: You find its matching Task object in the list of tasks in the Watch.
[3]: You assign the startDate defined on the iOS app.
[4]: You make sure you go to UI thread to send a notification for the Watch to refresh its display.

Refreshing Watch display

In DoItCoach WatchKit Extension/InterfaceController.swift in awakeWithContext(_:), add one line of code [1] to register to the event CurrentTaskStarted:
override func awakeWithContext(context: AnyObject?) {
  super.awakeWithContext(context)
  NSNotificationCenter.defaultCenter()  // [1]
                      .addObserver(self, 
                                   selector: #selector(InterfaceController.taskStarted(_:)), 
                                   name: "CurrentTaskStarted", 
                                   object: nil)
  display(TasksManager.instance.currentTask)
}
Still in DoItCoach WatchKit Extension/InterfaceController.swift implement the following methods to respond to the NSNotificationCenter event:
func taskStarted(note: NSNotification) { 
  if let userInfo = note.object,  
     let taskFromNotification = userInfo["task"] as? TaskActivity,
     let current = TasksManager.instance.currentTask
     where taskFromNotification.name == current.name { 
    replayAnimation(taskFromNotification)           // [1]
  }
}
    
func replayAnimation(task: TaskActivity) {
  if let startDate = task.startDate  {
    let timeElapsed = NSDate().timeIntervalSinceDate(startDate) 
    let diff = timeElapsed < 0 ? abs(timeElapsed) : timeElapsed
    let imageRangeRemaining = (diff)*90/task.duration   // [2]
    self.group.setBackgroundImageNamed("Time")
    self.group.startAnimatingWithImagesInRange(NSMakeRange(Int(imageRangeRemaining), 90), 
               duration: task.duration - diff, repeatCount: 1) // [3]
  }
}
[1]: For the current task, replay the animation.
[2]: Calculate how many animation frames have already elapsed. Expect a short delay between the task starting on the iPhone and the Watch receiving it.
[3]: As you've seen in Tutorial3: Animation, launch the animation.
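The frame arithmetic in [2] and [3] can be sketched in isolation. This is a hypothetical helper, not part of the project, assuming the 90-frame "Time" image sequence is spread evenly over the task duration so elapsed time maps linearly onto a start frame:

```swift
import Foundation

// Hypothetical helper isolating the frame math from replayAnimation(_:).
func animationState(startDate: Date, duration: Double,
                    now: Date = Date()) -> (startFrame: Int, remaining: Double) {
    let elapsed = abs(now.timeIntervalSince(startDate))
    let startFrame = Int(elapsed * 90 / duration)  // frames already played
    return (startFrame, duration - elapsed)        // time left to animate
}
```

For a 900-second task started 450 seconds ago, the animation resumes at frame 45 with about 450 seconds left to play.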

Build and Run

You can now start a task from your phone. Careful reader that you are, you'll notice that once a task started from the phone completes, it is not refreshed in the Watch app. That brings us to the next section: let's talk about your challenges.

Challenges left to do

Your mission, should you choose to accept it is:
  • refresh the task list on the Watch when a task started from your phone is completed
  • remove the bootstrap code in TaskManager.swift. All tasks should be persisted on the iPhone (the persistence code is already written for you in Task.swift). When the iPhone app launches, send the list of tasks to the Watch. Whenever a task is added on the phone, send the updated list to the Watch.
  • make the animation carry on where it should when the Watch app goes to the background and comes back to the foreground.

Get final project

If you want to check the final project, here's how to get it.
cd DoItCoach
git checkout step6
open DoItCoach.xcodeproj
Or if you want to get the final project with all the challenges implemented:
cd DoItCoach
git checkout master
open DoItCoach.xcodeproj

What's next?

With this tutorial, you saw how to send application context updates from your phone to your Watch. Now that you know how to communicate between your app and your Watch, you're all set to make great apps!

Watch tutorial 5: Watch Connectivity - Direct Message

This post is part of a series of short tutorials on Apple Watch; if you need to catch up, see the previous post. In this tutorial, you're going to see how to communicate between your Watch and your iOS app.

How does my Watch talk to my phone, and vice versa?

WatchConnectivity framework provides different options for implementing a bi-directional communication between WatchKit and iOS apps.
  • Application Context mode allows the exchange of data serialized as a dictionary from one app to another. The transfer happens in the background: messages are queued and delivered to the receiving app via a delegate method. One specificity of Application Context mode is that only the latest update is sent (i.e. older data is overwritten by newer data). This is perfect if the receiving app only needs the latest state.
  • User Information transfer mode is similar to Application Context mode. It is also a background mode and messages get queued, but unlike Application Context, all messages are delivered once the destination app is available.
  • Interactive Messaging mode sends messages (serialized as a dictionary) immediately to the receiving app. The receiving app is notified of the message's arrival via a delegate method call.
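The "only the latest update is sent" behaviour of Application Context can be illustrated with a toy model. This is a plain Swift sketch, not the real WCSession API: each update replaces any undelivered one, so the receiver only ever sees the most recent dictionary.

```swift
import Foundation

// Toy model of Application Context's "latest state wins" semantics.
struct ApplicationContextChannel {
    private(set) var pending: [String: Any]?

    mutating func updateApplicationContext(_ context: [String: Any]) {
        pending = context  // overwrite: older data is lost by design
    }

    mutating func deliver() -> [String: Any]? {
        defer { pending = nil }
        return pending
    }
}
```

Two updates sent before the receiver wakes up collapse into one delivery: only the second dictionary arrives.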
Whether you send a message from your iOS app or from your Watch app, the method to call is the same on both devices. Similarly, when you receive a remote call, the delegate method to implement is the same. You'll get a "déjà vu" feeling when developing with WatchConnectivity, especially for bi-directional messages.

Although there is a symmetry in how the WatchConnectivity framework is used, choosing which option to use (queued messages vs direct messages) really depends on your use case and where you send from. Time to dig into the nitty-gritty of direct messages.

Get starter project

In case you missed Watch tutorial 4: Animation, here's how to get the starter project. Clone and check out the initial project by running:
git clone https://github.com/corinnekrych/DoItCoach.git
cd DoItCoach
git checkout step4
open DoItCoach.xcodeproj

The Use Case

Let's start using WatchConnectivity with this use case in mind. You want to start the task on your Watch and be able to see it as started on your iPhone. From Apple Documentation:

Calling this method from your WatchKit extension while it is active and running wakes up the corresponding iOS app in the background and makes it reachable. Calling this method from your iOS app does not wake up the corresponding WatchKit extension. If you call this method and the counterpart is unreachable (or becomes unreachable before the message is delivered), the errorHandler block is executed with an appropriate error. The errorHandler block may also be called if the message parameter contains non property list data types.

If I call sendMessage(_:replyHandler:errorHandler:) from my Watch, it has the ability to wake up my iOS app. How cool!

I prefer to use direct messages for actions from the Watch to the iPhone. The other way around, iPhone to Watch, is less useful, as the chances that your Watch app is active when you send a direct message from your phone are slim.

Direct messages from the Watch to your phone are useful to say "hi phone, go fetch me these resources", but bear in mind they are not queued: if your phone is switched off or out of range, the call fails and the message is never delivered.

As a rule of thumb, use Application Context or User Info when synchronizing data. We could have used Application Context here, but we'll design the DoItCoach Watch app as a companion of the phone app: all the state is persisted on the phone.

Initialise WCSession

WCSession.defaultSession() returns a singleton object. You still need to define which object handles the delegate methods, and to activate the session.

Where?

For the Watch app, the best place to do it is ExtensionDelegate.swift as this is the place where the life cycle of the app takes place.

How?

In DoItCoach WatchKit App Extension/ExtensionDelegate.swift, add the import:
import WatchConnectivity
var session : WCSession!
func applicationDidFinishLaunching() {
  // Perform any final initialization of your application.
  if (WCSession.isSupported()) {
    session = WCSession.defaultSession()
    session.delegate = self
    session.activateSession()
  }
}
The compiler is now complaining because your class has to implement WCSessionDelegate. In DoItCoach WatchKit App Extension/ExtensionDelegate.swift, after the class definition, add the extension declaration:

extension ExtensionDelegate: WCSessionDelegate {}

Send DM from Watch to Phone

In DoItCoach Watch Extension/InterfaceController.swift, add the method that performs the send:
func sendToPhone(task: TaskActivity) {
  let applicationData = ["task": task.toDictionary()]
  if session.reachable { // [1]
    session.sendMessage(applicationData, replyHandler: {(dict: [String : AnyObject]) -> Void in
      // handle reply from iPhone app here
      print("iOS APP KNOWS Watch \(dict)")
    }, errorHandler: {(error) -> Void in
      // catch any errors here
      print("OOPs... Watch \(error)")
    })
  } 
}
At the beginning of the file add an import WatchConnectivity.

[1]: As a best practice, check that the app on the iOS device is reachable so you don't waste a call.

Still in DoItCoach Watch Extension/InterfaceController.swift, call sendToPhone(_:) in onStartButton() as shown in [1] (the rest of onStartButton is unchanged):
@IBAction func onStartButton() {
  guard let currentTask = TasksManager.instance.currentTask else {return} 
  if !currentTask.isStarted() { 
    let duration = NSDate(timeIntervalSinceNow: currentTask.duration)
    timer.setDate(duration)
    // Timer fired
    NSTimer.scheduledTimerWithTimeInterval(currentTask.duration,
                                           target: self,
                                           selector: #selector(NSTimer.fire),
                                           userInfo: nil,
                                           repeats: false) 
    timer.start() 
    // Animate
    group.setBackgroundImageNamed("Time")
    group.startAnimatingWithImagesInRange(NSMakeRange(0, 90), duration: currentTask.duration, repeatCount: 1)
    currentTask.start()
    startButtonImage.setHidden(true) 
    timer.setHidden(false) 
    taskNameLabel.setText(currentTask.name)
    sendToPhone(currentTask) // [1]
  }
}
You also need to send a message to your phone once the task is finished in [1]:
func fire() {
  timer.stop()
  startButtonImage.setHidden(false)
  timer.setHidden(true)
  guard let current = tasksMgr.currentTask else {return}
  print("FIRE: \(current.name)")
  current.stop()
  group.stopAnimating()
  // init for next
  group.setBackgroundImageNamed("Time0")
  display(tasksMgr.currentTask)
  sendToPhone(current) // [1]
}

Receive Message in iOS app


Where?

For the iOS app, the best place to do it is AppDelegate.swift, as this is where the app's life cycle takes place. You want to be able to receive direct messages even when your iOS app is not started.

How?

var session : WCSession!
func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
  if (WCSession.isSupported()) {
    session = WCSession.defaultSession()
    session.delegate = self
    session.activateSession()
  }
  return true
}
Don't forget to import WatchConnectivity.
DΓ©jΓ  vu feeling?
;)

Delegate implementation

// MARK: WCSessionDelegate
extension AppDelegate: WCSessionDelegate {
  func session(session: WCSession, didReceiveMessage message: [String : AnyObject], replyHandler: ([String : AnyObject]) -> Void) {
    print("RECEIVED ON IOS: \(message)")
    dispatch_async(dispatch_get_main_queue()) { // [1]
      if let taskMessage = message["task"] as? [String : AnyObject] {
        if let taskName = taskMessage["name"] as? String {
          let tasksFiltered = TasksManager.instance.tasks?.filter {$0.name == taskName}
          guard let tasks = tasksFiltered else {return}
          let task = tasks[0]                   // [2]
          if task.isStarted() {
            replyHandler(["taskId": task.name, "status": "already started"])
            return
          }
          if task.endDate != nil {
            replyHandler(["taskId": task.name, "status": "already finished"])
            return
          }
          if let endDate = taskMessage["endDate"] as? Double {
            task.endDate = NSDate(timeIntervalSinceReferenceDate: endDate)
            replyHandler(["taskId": task.name, "status": "finished ok"])
            NSNotificationCenter.defaultCenter().postNotificationName("TimerFired", // [3]
                                                 object: ["task":self])
          } else if let startDate = taskMessage["startDate"] as? Double {
              task.startDate = NSDate(timeIntervalSinceReferenceDate: startDate)
              replyHandler(["taskId": task.name, "status": "started ok"])
          }
          saveTasks()    // [4]
        }
      }
    }
  }
}
[1]: You dispatch to the main queue because eventually you want to refresh the UITableView on the UI thread.
[2]: You get the task name from the dictionary. You find the matching task in iOS app (task name is used as an identifier).
[3]: You set either startDate or endDate on the task itself. When you end the task, you need to issue an event so that UITableView get refreshed.
[4]: You save all tasks.
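The decision tree in the delegate lends itself to being extracted into a pure function. Here's a hypothetical sketch in current Swift, with a simplified TaskState standing in for TaskActivity; the checks mirror the delegate's ordering (started and finished tasks are reported as such, otherwise the start or end date from the message is applied):

```swift
import Foundation

// Hypothetical extraction of the reply decision into a testable pure function.
struct TaskState {
    var startDate: Date?
    var endDate: Date?
    func isStarted() -> Bool { startDate != nil && endDate == nil }
}

func reply(for message: [String: Any], task: inout TaskState) -> String {
    if task.isStarted() { return "already started" }
    if task.endDate != nil { return "already finished" }
    if let end = message["endDate"] as? Double {
        task.endDate = Date(timeIntervalSinceReferenceDate: end)
        return "finished ok"
    }
    if let start = message["startDate"] as? Double {
        task.startDate = Date(timeIntervalSinceReferenceDate: start)
        return "started ok"
    }
    return "ignored"
}
```

Keeping the state transition separate from the WCSession callback makes it easy to reason about which reply the Watch gets for each message.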

Get final project

If you want to check the final project, here's how to get it.
cd DoItCoach
git checkout step5
open DoItCoach.xcodeproj


Build and Run

Before launching the app, delete any previous version of DoItCoach on your Phone.




What's next?

With this tutorial, you saw how to send direct messages from your Watch to your phone. As you've seen, you can do a lot with direct messages (they can even wake up an iOS app), but there are still cases where your message won't reach your phone. When it comes to synchronizing state between the Apple Watch and its iPhone companion app, the Application Context or User Info transfer modes are much more suitable. See Watch tutorial 6: Watch Connectivity (Application Context) to learn more.