এই পৃথিবীর মায়ায় ছুঁয়ে থাকা রোদ্দুর (The sunlight that clings to the affection of this world)

The sunlight that clings to the affection of this world,
I will never have you as my own like this.

Hands held in hands, a few illusory nights,
perhaps I will never watch the moonlight; maybe this is what life is.
Don’t call for me, a reel emptied of its kite.

I will be nowhere; maybe this is what life is.
I will not come back, a reel emptied of its kite.
I will be nowhere; I will not be.

How is the weather (at home) today?

“How is the weather (at home)? (বাসায় আবহাওয়া কেমন?)”,

My brother Quazi Obaida used to ask me this question whenever he was late returning home from outside, usually the soccer or cricket field. Like any other primary (and high) school kids, I guess we used to push our parents to test the boundaries, you know what I mean, right? Now, being a dad of two cheeky little ones and getting a taste of my own medicine, I must say our parents were extremely patient.

And by “weather” we meant the mood of our parents, আম্মা (mom) in particular: whether she was angry about anything. Sunny and beautiful weather basically meant “no worries”: no scolding, no slap on the bottom even if one was actually due. Bad weather, on the other hand, meant a lot of things, and none of them you would look forward to. I mean, who would want weather predictions the likes of “chance of rain”, “thunderstorm”, “severe thunderstorm with a chance of hailstorm”, or even a freaking tornado? And you know how quickly these things escalate, right? Maybe if we had stopped that cricket match at midday and come back home for lunch, we could have avoided the hailstorm and probably settled for the heavy rain.

But it wasn’t like I could call a bot remotely or query a web service and check the weather (mood) at home, right?

Nor could we predict a thunderstorm in advance. There were data (indicators) all around (how mum answered my questions in short sentences, wasn’t smiling while talking to me, etc.), but we weren’t yet old enough to process them, let alone make predictions for the next two days.

But why am I writing about this? Well, an interesting thing happened a few months back. While driving home after picking my kid up from school, out of nowhere I asked him,

“baba, how is the weather at home today?”

He seemed as confused as my voice sounded doubtful. What the heck did I just ask? It felt like déjà vu.

Now this is the bit you might find interesting. Can you predict how someone is feeling based on his/her behavior? I am sure you can, up to a point. If you actively look for signs you can probably draw conclusions.

Now back to the behavior bit. As we are all somehow connected to some sort of digital lifestyle, our daily behavior probably leaves its mark on our digital footprint. You know: cat videos, YouTube clips, browsing through photo albums, listening to sad songs on loop, browsing profiles, etc., the usual ones. Even though there is no accurate one-to-one mapping between your mood and your behavior (your digital behavior, rather), if you have enough data to map to actual events, and know someone closely enough to validate that information, you will start seeing patterns in that digital data.

You probably should have already guessed: the easiest way to get data about someone is to go through their browsing history/log. Like any other mediocre dev, I searched Stack Overflow and GitHub Gists instead of writing something from scratch. At that point all I wanted to do was create a very simple PoC.

This is what I wanted to do in the first iteration, on a daily basis (a rough sketch of the scoring idea follows the list):

  • Read my log every day and dump it into a CSV with date and URL;
  • Analyse the log on 3 data points only (Music, Video, News/Article/Site, i.e. links other than music and video);
  • Create a predefined taxonomy/vocab of weighted tags (completely biased, with no automatic learning involved);
  • Add up the weighted values for the analysed log;
  • Identify/create a relationship between the total weight and the current mood, either happy or not happy (sad/annoyed/what not; remember the “Not Hotdog” app from Silicon Valley); and
  • Validate, go back and fix the weighted taxonomy lookup table.
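
To make the idea concrete, here is a minimal sketch of that scoring loop in Node.js. None of this is the actual PoC code: the history.csv layout, the tag names and the weights are all made-up placeholders for illustration.

```js
// mood-score.js: a minimal sketch of the weighted-tag scoring idea.
// history.csv (assumed layout: a header row, then "date,url" rows) and the
// weights below are placeholders, not the real lookup table.
const fs = require('fs');

// The hand-made, admittedly biased taxonomy/vocab of weighted tags.
const weights = {
  music: { 'sad-song': -2, playlist: 1 },
  video: { cat: 2, trailer: 1 },
  other: { news: 0, forum: -1 },
};

// Bucket a URL into one of the 3 data points (music, video, other).
function classify(url) {
  if (/youtube|vimeo/i.test(url)) return 'video';
  if (/spotify|soundcloud/i.test(url)) return 'music';
  return 'other';
}

// Sum the weighted values for one day's log.
function scoreDay(csvText) {
  let total = 0;
  for (const line of csvText.trim().split('\n').slice(1)) {
    const url = line.split(',')[1] || '';
    for (const [tag, weight] of Object.entries(weights[classify(url)])) {
      if (url.includes(tag)) total += weight;
    }
  }
  return total; // total weight, mapped later to happy / not happy
}

console.log(scoreDay(fs.readFileSync('history.csv', 'utf8')));
```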

The above was my expectation; what happened was quite the opposite.

On the first day (in reality the 2nd day) there was not much data (I forgot that incognito mode doesn’t get logged), and after 3-4 days I found that I was very conscious about what I was doing, so clearly it wasn’t an actual reflection at all.

I stopped my PoC and, due to workload, eventually forgot to check on it further.

After 2-3 weeks, out of curiosity, I checked the log and noticed loads of interesting data. It turned out my better half had been using it instead of her laptop.

Now this became a moral dilemma. I definitely didn’t want to go through what my better half was browsing, but at the same time I was curious whether my simple algorithm could predict how her day had been.

So I told her how anyone with access to the computer can see the browsing history, and also showed her the incognito feature and how it doesn’t get logged (now here is a piece of advice: please don’t try to educate your better half, especially about incognito mode, as you will have to answer more questions than you want to).

Now that it was off my chest (well, sort of), I started validating the data and had to tweak the weights in the lookup table to match the actual results. In some cases I had to go back and change values because the results for certain days felt incorrect, until the actual cause revealed itself a few days later. I wanted all of this to be automated, but it felt like I was tweaking it almost every day and the patterns weren’t persistent. I had to rethink the approach and re-evaluate my taxonomy table. I added another column marking each prediction as likely, highly likely or unsure (I only used the assertive path, and didn’t even bother going down the non-assertive one). That seemed to simplify the manual involvement quite a bit.

After 3 to 3.5 months of data, the prediction seems to work most of the time (again, sort of). The next step was to create a web service to query the data. I should have used Laravel but, being lazy, I used one of my local Drupal dev instances instead. I used Feeds to periodically import the data and exposed it as JSON using Views. It was a very funny but pleasing feeling: from my driveway I could call the web service running on my dev environment and see the weighted score of happiness (or unhappiness, or absence of data), as sketched below.
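
Calling it from the driveway looked roughly like this. A sketch only, assuming Node 18+ (for the global fetch) and a made-up Views JSON path; the real endpoint and field names were different.

```js
// check-weather.js: a sketch of querying the Views JSON feed from the phone.
// The URL and the weighted_score field are hypothetical placeholders.
async function howIsTheWeather() {
  const res = await fetch('http://dev.local/api/json/mood-today');
  const rows = await res.json();
  if (!rows.length) return 'no data';
  return rows[0].weighted_score >= 0 ? 'sunny' : 'chance of thunderstorms';
}

howIsTheWeather().then(console.log); // e.g. "sunny"
```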

Now, if I really wanted, I could use an NLP provider like Dialogflow or Lex and literally ask, “how is the weather at home today?”.

Ok, I was joking… I didn’t end up doing that last bit. I imagine my smartphone assistant reminding me, “hey dumbo, you better get some flowers for her today”. As exciting as that sounds from the PoC’s point of view, in reality it would sound very lame and pathetic.

It will be a sad day if I need to blindly rely on some lines of code rather than looking at my better half to sense whether she’s not happy.

I don’t know how she does it, but I’m sure she doesn’t need to go through my logs to get it right. I just wanted to see if my little setup would work; I was curious whether it could actually flag if I need to bring flowers home today.

So, how is the weather at your home today?

* First published on LinkedIn 

Micro story / লুঙ্গি (Lungi)

Lunch-hour discussion about the gruesome aspects of #ResidentEvil7 (#re7):

My Colleague:
You haven’t seen anything yet; wait for the boathouse level. You will need a fresh pair of underwear.
Me (smiling):
I am almost certain I won’t.
Also Me:
(I guess he doesn’t know the concept of #লুঙ্গি / LUNGI, and I have no intention of elaborating on that, as it will only lead him to mislabel it as a weird version of a male FROCK anyway.)

What’s under your bed (Over Architecting a solution)?

So it was almost the end of the school holidays, and even though I had promised my little one that we would do some fun project together, it hadn’t happened. So on the last weekend we decided to build something together with the tons of Lego that he possesses. We started, and pretty soon we ended up talking about how to get the missing Lego pieces out from under the bed.

“If only we could see what’s under the bed,” my kid said. And all of a sudden the Lego project turned into this new fun project:

Send something under the bed (in the 3rd room), watch it as a live feed on our TV (in the living room), and control the whole thing from there (you know, like you see in movies). And of course, do this with no visit to a shop (i.e. using whatever tools we have at home).

I know it sounds so silly, but I promise you the project was so much fun!

Things you will need

Let’s go through the things you will need. I am pretty sure you already have all of these components (in some form):

  • Two relatively modern mobile devices (better if one of them is a tablet)
  • Google Home (installed on the mobile)
  • TeamViewer (installed on both devices: the QuickSupport edition on the mobile and the Remote Control edition on the tablet)
  • A remote-controlled car (with a headlight)
  • A TV with Chromecast, or a Smart TV (Android based), or a media player with Chromecast built in (e.g. Mi Box)
  • WiFi (devices connected to the same network)
  • Better half not around

Setup

  • Make sure all the devices are connected to the same WiFi network
  • Make sure the remote control works fine with the car and you know its range (like from your living room to the 3rd room)
  • Install TeamViewer QuickSupport on the mobile device (it will ask you to verify your device)
  • Install TeamViewer Remote Control on the tablet (or a similar device)
  • Send a connection request to the mobile from your tablet through TeamViewer (using the ID)
  • Once connected, you can operate your mobile remotely using your tablet (at this point your mobile is mirrored on the tablet screen)
  • Go to Google Home, select cast screen, and select your connected TV
  • If the previous step is done correctly, your mobile screen is now mirrored both on the tablet (through TeamViewer, with full control) and on the TV (through Chromecast, display only)
  • Strap the mobile to the remote-controlled car so that the camera is not obstructed (and of course stays steady when in motion).

And the silly fun begins

  • Sit (even though my little one was jumping most of the time) in front of your TV and get the tablet ready. Turn on your mobile’s camera and you should see a live feed from your mobile device. (If at this point it doesn’t feel a bit adventurous, then either you are too serious or maybe I am too silly. My kid was holding the tablet as if it was a scene from Star Trek.)
  • Use the car remote to navigate the car
  • Find the treasures/monsters (in my case mostly Lego) under the bed (don’t forget to turn the car’s light on once under the bed)
  • Switch cameras if you want to reverse.

Cautions

  • If you are using your own precious mobile, make sure your kid (or you) isn’t racing the car down the narrow hallway, as I am pretty sure it’s going to bounce into walls at some point, and your better half won’t be much amused.

Food for thought

Needless to say, my kid was very proud of this (silly!) project, and he (as well as part of me) wanted to show it off to mommy. After she listened and saw a brief demo, I honestly couldn’t tell if she was impressed, amused or annoyed. So I thought I should probably give it a bit of legitimacy by saying,

you know how I once gave you an example of architecting a solution by combining existing frameworks, and in the process eliminating the need to reinvent the wheel?

Not sure if she even remembered that, but she asked,

so how are you guys getting things out from there?

I was gonna say, “Who wants to get the things out in the first place?” Instead I said,

that’s the next phase.

To that she gave me THAT look, went to the garage, came back with a broomstick and a torch, and used them to sweep a lot of Lego out from under the bed.

My just-turned-9-year-old almost shouted (I think he echoed me),

“But mommy, that’s not cool!” (Come on! Compared to our version of the solution? It’s not!)

My facial expression probably supported my kid’s voice, to which she said (with a bit of a smile on her face, and I want to believe that was a smile),

So you went through all this trouble to devise a solution just so that you don’t have to ‘reinvent’ the wheel (and to make the solution ultra cool, to show off). Don’t tell me ‘real world solutions’ in your world are done like that.

Well, that was awkward! (I literally had a flashback of all those different flashy frameworks in my head.) I knew she was joking, but I wanted to protest! I wanted to say it’s not the same thing, yet I didn’t/couldn’t.

My kid probably sensed the unusualness in my silence; we (bap beta, father and son) both looked at each other and silently agreed:

“Mommy is such a party pooper!”

Building (and integrating) a ChatBot using govCMS (SaaS), NodeJS and Dialogflow

It all started around 5-6 months ago, when we got an email from the govCMS team asking for our feedback on how they could improve their help/support section regarding govCMS in general. I still remember De’leon and I talking about our very positive feedback (in one of our tea/coffee breaks) internally, and that break/talk eventually turned (as happens most of the time) into a brainstorming session.

TL;DR (just read the title, and watch the demo video)

End user behaviour and background to this POC

Long story short, we identified a few common patterns in the govCMS topics/questions that our stakeholders ask (the ones you would expect from most stakeholders). Depending on the role, the query varies from “can I get this ‘x’ web feature/component in govCMS”, to “can I integrate this ‘y’ service”, to “how can I do this ‘z’ task in govCMS”. Sometimes these queries are really very vague in nature and depend heavily on the theme/component or framework you build (or are going to provide) for your stakeholders on top of the basic govCMS framework, and I am sure you can already guess why simply Googling is not enough to get the intended answers, due to this vagueness.

To tackle the most obvious questions about end users’ daily routine tasks in govCMS (or any other CMS, really), we created a CMS training area for our clients, with step-by-step SOPs and screen-casts bundled as a Knowledge Base system (funnily enough, it’s built on govCMS). This KB/help system works almost fine, but every now and then we still get similar queries over MS Lync (our integrated communication tool) from stakeholders and team-mates. And this behaviour (queries/conversations over Lync) is not limited to govCMS; it ranges across SharePoint, ASP/HTML frameworks, general JS/jQuery, integration with existing forms, etc., and the audience is also very diverse, spanning multiple OUs and Departments (and sections) with different roles (and I am sure you know what that is like).

If you only want to see our implementation, skip the text below and jump to the video. I promise (hope) that will be less boring than all this text 😉

I do get/understand this approach (way of communicating). It’s like asking your team-mates about (let’s say) an internal framework class (or a Bootstrap class) name before you actually ask Google, check the reference site or your user guide. We all do that, and I have a feeling we do it deliberately. It’s more engaging this way than typing into Google, seeing a flood of information, and then sorting through to find the one you need (sometimes getting stuck in a trial-and-error maze before you land on the one you were actually looking for).

Now, that brainstorming session, along with the repetitive Lync-conversation behaviour I just mentioned, triggered this idea:

Why not create a Bot and integrate it with our KB system?

We could then integrate the whole thing into the KB website, and our users could ask the same questions; with a bit of tweaking to our existing KB IA/design, the Bot could answer rather accurately, or show the correct path to more closely related information (similar to what we do over Lync when we respond, only this time it’s a Bot and not me or De’leon).

Now that you know the background, before going any further let’s see the Bot in action; it will probably make more sense then (see the video below).

Demo

Please note this is a POC project that was built in our free time (so expect a bit of crudeness).

As you can see, the Bot can respond in the following ways:

  • Step by step guide
  • Video
  • URL link
  • SOP
  • Layout (details and suggestions)
  • Code repository (like a private Gist on GitHub)
  • User Guide
  • Component etc.

A conversational example

Just to illustrate how the conversation works, here is an example. You can ask something like,

  • can you show me some layouts?

And the Bot would reply,

  • Which framework?

As there are multiple frameworks (e.g. SharePoint), once you say which framework, it can fetch the layouts for that framework and ask further,

  • Any particular layout, or all of them (three columns, two columns)?

Once you say which one you are after (e.g. two columns), it can show you the details of the available two-column layouts for that framework.

Or you could ask for a code example, a class name or a video tutorial, and it would try to respond accordingly (if there is data in the KB system).

Please note: this list can grow really quickly if the design is not done properly, but then again, you are only limited by your design.

As you have seen in the video above, it can cover all these queries.

So how does this work?

Well, you probably already have a basic (or advanced) idea of how a typical Bot works (as they say, 2018 is the year of the Bots). You have also probably heard about the big providers (Google, Facebook, Microsoft, etc.). We used Dialogflow for our POC (for no particular reason). In a very broad sense, a request to a typical Bot in Google’s Dialogflow framework involves these steps (a minimal webhook sketch follows the list):

  1. The user asks a question (in our case, through a chat window embedded in HTML in the web browser)
  2. The request text gets parsed by JS (integrated into the chat window) and sent to Dialogflow (DF)
  3. DF selects a suitable intent (that we defined beforehand) and invokes fulfilment (FF) if required
  4. If a webhook (WH) is provided for the FF, DF invokes that WH
  5. DF does a POST request (defined in its FF section) to the service URL (in our case, a Node server)
  6. Our Node server (NS) extracts the incoming info (passed by DF), does the required data operations on the server, and replies back to DF with the result
  7. DF evaluates and validates the response, and sends the final reply back to the JS (our browser, where the chat window resides)
  8. JS updates the chat window and renders the reply
  9. DF is ready for the next request.
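
For steps 5 and 6, the webhook can be as small as the sketch below. This is not our POC code: it assumes Express, and the payload/response fields follow Dialogflow’s v2 fulfilment format (queryResult, fulfillmentText), so check the format your agent version actually sends.

```js
// webhook.js: a minimal sketch of the fulfilment endpoint (steps 5 and 6).
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  // DF POSTs the matched intent and its parameters to us.
  const intent = req.body.queryResult.intent.displayName;
  const params = req.body.queryResult.parameters;

  // Do whatever data operation the intent needs, then reply to DF.
  res.json({
    fulfillmentText: `You asked "${intent}" with ${JSON.stringify(params)}`,
  });
});

app.listen(3000, () => console.log('Fulfilment webhook on :3000'));
```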

That was a very, very general explanation of the data flow (a lot happens under the hood). But for simplicity’s sake, keeping that in mind, let’s expand the process a bit, as in our scenario we actually need to do a bit more data manipulation (with a dependency on an API call) before we send our reply to DF. As mentioned before, we want to integrate govCMS as our content service, and for that the flow can be extended as shown below. Again, do note this is a very superficial explanation (please excuse my rough sketch).

Let’s identify the steps

(you can see steps 1-4 are exactly the same; a rough Node-side sketch of steps 6-9 follows the list)

  1. The user asks a question (in our case, through a chat window embedded in HTML in the web browser)
  2. The request text gets parsed by JS (integrated into the chat window) and sent to Dialogflow (DF)
  3. DF selects a suitable intent (that we defined beforehand) and invokes fulfilment (FF) if required
  4. If a webhook (WH) is provided for the FF, DF invokes that WH
  5. DF does a POST request, defined in its fulfilment (FF) section, to the service URL (in our case, a Node server)
  6. Our Node server (NS) extracts the info passed by DF (intents/actions and required parameters), but this time it needs more information, based on the parameters it received, before it can get back to DF
  7. NS restructures the parameters and calls an API (this is our govCMS API) to get the required information
  8. Our govCMS (D7) responds to the API call and spits JSON data back to NS
  9. NS validates the data, restructures the JSON into a format suitable for DF, and sends it back to DF
  10. DF evaluates and validates the response, and sends the final reply back to the JS of the web browser (please note: for the POC demo we are not validating the data)
  11. JS updates the chat window and renders the reply
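
Steps 6 to 9 on the Node side can be sketched like this. The KB host, the Views path and the field names are all hypothetical; it only illustrates the call-back-into-govCMS pattern (Node 18+ assumed for the global fetch).

```js
// govcms-lookup.js: a rough sketch of steps 6-9 inside the webhook handler.
async function lookupLayouts(framework, columns) {
  // 7. Restructure the parameters into a govCMS (D7) Views JSON request.
  //    The host and path here are made up for illustration.
  const url = `https://kb.example.gov.au/api/json/layouts/${framework}/${columns}`;

  // 8. govCMS responds to the API call with JSON data.
  const rows = await (await fetch(url)).json();

  // 9. Restructure the result into something DF can send as the reply.
  return {
    fulfillmentText: rows.length
      ? `Found ${rows.length} ${columns}-column layout(s) for ${framework}.`
      : `Sorry, no ${columns}-column layouts for ${framework} in the KB.`,
  };
}
```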

Are you confused yet? No? Well, let me try again ;).

Demo

See the video below (and again, excuse my rough sketch); it shows the flow.

Boring technical bits

It’s actually easier than you think. The hardest bit was (and will be) coming up with a good, extendable schema for your Knowledge Base (KB) entity. This is what we have done:

  • Define the types of queries the end user can run on the KB
  • Define enough intents and related contexts (parameters) to identify and cover a wide range of KB items for a general user
  • Finally, define fields for the KB content type (to support the above two clauses) and make them flexible for future changes
  • Create APIs that use those parameters and refined fields to produce structured JSON data
  • Get your choice of Bot framework provider (we used Dialogflow in this POC) and recreate the identified contexts/parameters while creating the related intents
  • Add training texts and provide responses
  • Mark the mandatory parameters that are required to make the API call later on
  • Include the Welcome/Small Talk (prebuilt) agent to handle the usual chit-chat (conversation starter)
  • Test the training and fix it if the contexts are not picked up properly
  • Use Node.js to handle Dialogflow (DF)’s fulfilment POST for the webhook (we could have used PHP, but we wanted to try out Node.js)
  • In Node.js, extract the required parameter(s) by catching the correct intent/action (a simple if/else or switch)
  • Construct the API request with the received parameters to filter/refine the result we want
  • Reformat the returned result in Node.js and return it to DF
  • Once the data comes back from DF to the JS on our site, reformat it into HTML before rendering so that we can have rich-text output (a small sketch of these front-end steps follows this list)
  • Append the HTML to the chat window
  • Finally, use CSS to style the appearance of the chat window
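
The front-end side of those last few bullets can be sketched like this. The /df-proxy endpoint, the field names and the markup are assumptions for illustration, not the POC’s actual chat window code.

```js
// chat.js: a small sketch of the browser-side rendering steps.
async function ask(question) {
  // Send the user's text towards DF (here via an assumed /df-proxy endpoint).
  const res = await fetch('/df-proxy', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: question }),
  });
  const reply = await res.json();

  // Reformat the reply into HTML so we get rich-text output...
  const bubble = document.createElement('div');
  bubble.className = 'bot-msg'; // appearance handled in CSS
  bubble.innerHTML = `<p>${reply.fulfillmentText}</p>`;

  // ...and append it to the chat window.
  document.querySelector('#chat-window').appendChild(bubble);
}
```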

Looking forward

Now what’s the big deal about this?

Well, for this specific POC, to be honest, not much really. It was more about seeing an idea in action (after all, it’s a POC, right?). But it was an interesting POC, to say the least, for a couple of reasons:

  • We wanted to use the govCMS SaaS distribution only (so no extra modules).
  • We wanted to build a system where the end users populate the content, so they can manage any future KB items internally.
  • We wanted to use rich messages in our website (but without tapping into a 3rd-party conversational/messaging platform, i.e. Slack or FB Messenger).
  • We also wanted to inject our custom response before the message gets rendered (beyond the provided rich messages).

But it can be a big deal if you can extend it properly for your end users (and hopefully we can). And when I say properly, I mean the information your users are interested in, specific to your Org, backed by user-interaction data from analytics like GA, or behaviour data from a tracker like Hotjar.

You can serve and control this info from a single centralised provider or from multiple ones (different base APIs from different systems/providers), depending on how those systems are implemented. The bottom line is that it doesn’t really matter, as related services from both scenarios can be integrated/controlled in your NodeJS server. A couple of services that you could extend come to mind straight away (see below).

  • Forms (leave, feedback etc.)
  • Procedures / Manuals (SOP) directory
  • Booking system (Carpooling etc.)
  • Shuttle service (and many similar common services)
  • People Directory system
  • Self serving kiosk (for general info) etc.

Let’s not get carried away

Now, back to where we started: using Lync as a conversational medium for tuned and accurate results. And here is the interesting bit,

you can use Skype (or Skype for Business) and implement the same conversational behaviour (remember we talked about communicating through Lync?).

Yes, you will need to redefine the rich-message section, but it will work almost exactly the same way.

Wrapping up

Thanks for staying with me up to this point. Let’s finish this article by considering two more scenarios.

  • You tell the Bot you want to create a piece of content; the Bot replies asking you to choose one of the allowed content types. Once chosen, it gives you the option to select which layout you want to go for (again, from the allowed layouts for the selected content type). Once chosen, it asks whether you want any other components added. It may also give you the option to choose different variants of the components (you know, views). Once that bit is done, it can finally show you a preview of what the content will look like, using pre-built templates. All without typing a single line of code. Now, wouldn’t that be nice?

What I have found is that a visual, with actual content rendered in the actual layout(s), helps to mitigate gaps between different stakeholders during the UI finalisation phase and helps to manage/set end-user expectations.

  • Now for the second scenario (and you have probably seen/used this in mobile apps): on hand-held devices, give end users the option to search using voice (with an integrated bot), so a user can search by saying “show me the topic ‘xyz’ from last week” or “when is the event ‘abc’ happening?”

But I am gonna pause here, that’s for a different day.

Let me know your thoughts!

Have you heard of Fortnite yet?

Have you heard of Fortnite yet?

Let me rephrase: do you have kid(s) aged 8 or older who play games? If the answer is yes, chances are I have your full attention now. Well, I have one 8-year-old and he plays games. I wasn’t that surprised when he asked me about Fortnite and how he could play the game.

So I read some reviews and watched a few game-plays on YouTube (you know, the usual routine). Needless to say, the reviews were good, and the game-play looked (vaguely) childish due to its cartoonish appearance, but interesting. And yes, it is very popular among young kids; everyone is talking about it.

Just to put it into perspective, have a look at the stats: 125 million players combined across all platforms, and that in less than a year, with 40 million players logging in to play every month.

So I finally downloaded the game to try it out myself (yes, I play games too, probably a bit too much) before giving access to my kid. Then something happened: after spending some time (hours) in one particular game mode (the free one), I couldn’t shake off an odd feeling about the impact this game could have on my child (or any child, even a 12-year-old, let alone an 8-year-old)! It’s a bit hard to express what I feel about this game (for kids), so instead I’ll share a snippet from an article another parent/friend in my circle shared with me.

“It doesn’t promote empathy or feelings for others. It’s about attacking, injuring and killing someone in a limited time.” Plus, arguments stating that it’s harmless because it’s ‘cartoon-esque’ with no ‘blood and guts’ are a cop-out. “No blood? Doesn’t matter. It’s the end result. It’s cornering and killing off your opponent. You don’t need blood to engage in violence.”

Suddenly, “PEGI 12” seems to have a new meaning (one which I had overlooked).

So, IMHO, Fortnite is a bad (I know, strong word!) choice for kids (again, I am talking about kids only), and this is coming from a person who actually loves playing games and has 5 different game consoles (and one high-end PC) at home (and I am not bragging). From my experience of playing Battlefield 4 (very vaguely similar in terms of clan/grouping, not game mode), a game that is way more mature, complicated, competitive and realistic, these are my observations:
  • It was addictive (way more than I thought it would be)
  • There was the feeling of missing out
  • It hampered my family life (and that happened to my colleague too)
  • The (meaningless) pressure of staying in the top 5 (yes, I am good at that game, and yes, now I am bragging a bit) was insane
  • The virtual achievement that somehow made me feel awesome and cool was something I couldn’t use in my real or social life (except in the provider’s, the game maker’s, little virtual lobby where people know you by your alias), and it was pretty much forgotten by the provider itself as soon as a new version came out.

On the positive side, I somehow felt relieved of stress (whatever it was) for that period of time. What I realised later was that this urge of “escaping to this momentary relief of stress”, then coming “back to reality” (with more added stress), and then “going back to that urge” again is pretty much the definition of addiction.

I haven’t played that game since (now that’s not bragging either).

Now, why did I bore you guys with my gaming experience? Because I can see Fortnite becoming something similar for kids/teenagers, and IMHO it can surely take its toll on them in the ways I experienced/shared above (this is where you start ranting about self-control and parental controls, but I will leave that with you).

As for my kid we are settling on Rayman!

Two (To/Too/Two/2) minutes horror story

In the salon (loud music and noise in the background), when my turn finally came:

What I said: “Not TOO short”

What my barber heard: TWO (size) and SHORT

And just like that, in TWO (time) minutes, we got a horror story. He was surprised that I wasn’t that surprised (not enough emotion) by the whole thing.

I guess he doesn’t know how common it is in my world for requirements to get lost in translation (in all that noise), and why I love Agile (the flip side of the coin).

Damn switches, why can’t they put those a bit higher?

“Oh, damn switches, why can’t they put those a bit higher?”

My little 2-year-old has reached the height where she can get to all the switchboards at home, and she thinks it is her solemn duty to make sure the lights in every room are turned on and off, without fail, at regular intervals.

So when her mom reacted like that, I said,

“It’s not the switch’s fault, the switch is fine… It’s like UI and UX, you know. UI-wise it’s fine, but whoever did the UX bit didn’t think it through.”

At that explanation of mine, seeing one of her eyebrows become a bow almost ready to let fly, I tried to explain a bit more,

“You know, they did not do the user story properly… I mean…”

and at that point I had to stop, as all of a sudden I felt I was in the middle of a bad user story.

Damn!!

Baba (Dad) what do you do?

“Baba (Dad) what do you do?” asked my little boy who just turned 8.

I was busy reading an article that depressingly discussed why MS won’t (or maybe will) allow in-game KB/mouse support in upcoming XBOX update(s), the kind of article that makes your inner pessimist go optimistic (at the same time). Because of my delay in responding to my kid, my better half chipped in and said,

“Well, dad is a Programmer”.

I felt so proud of my better half. I mean, she said Programmer; after a long time, someone called me that instead of Developer. (Honestly, when was the last time you crawled out of your developer shell and acted like a programmer? But let’s not talk about that.)

Back to the scenario I was talking about. My kid was a bit surprised and asked,

“what is a Programmer?”

Well, the question was thrown to my wife. For a moment I saw the blankness in her eyes (she did say something, I can’t even remember what, something to do with computers), and there you go, suddenly I felt like Chandler Bing (again).

Later that day, I tried to explain the conventional trend of a typical, average developer’s life.

I mentioned XaaS (X, or anything/everything, as a Service) and library-oriented development (i.e. don’t reinvent the wheel, open source or not).

You translate your problems into little features and check which existing libraries (written by programmers) can already handle them. You do try to minimise the footprint while combining those libraries, but in almost every scenario you prefer an existing one over creating one yourself, specific to your needs. Even better, you try to find something that does all of this for you (a.k.a. framework + platform). Once you figure out the integration bit, you do a little tweaking/customisation, then deploy to the XaaS model (almost like you go iSelect on them), and you are almost done.

“So why does it make you sad?” she asked. (Sad? Oh, I guess she picked that up from my voice while I was explaining all those things.)

I can’t answer that so easily, can I? I am really not sad; it’s hard to explain, you just feel a bit weird. It’s a bit like that cab driver: to the passenger, the best cab driver is the one who can take him/her from point A to B in the shortest time. It’s as if the skill of driving really comes second; it’s more about knowing which roads and shortcuts are best at a given time. In the ideal scenario (as conditions mostly stay the same), the driving skill will not be appreciated so much, whereas the route domain know-how will.

“But it’s not bad, right?” she said…

Well, you tell me. With roads, traffic and the related data/variables properly defined, with the help of a real-time cloud service integrated with AI, in turn integrated with an autonomous car, what do you think will happen? Who is no longer in the scenario?

At that question her eyes went blank again (but this time, I think, for a proper reason).

OK, that’s probably a bit far-fetched. But how about this? Let’s push the analogy a bit further (you know, far-far). Let’s say every (or most) prominent software library had a well-defined, common, standard manifest (describing what it does and doesn’t do, compatibility, etc.), and now combine them with the XaaS (mostly SaaS+IaaS) model. All you need now is an AI to kick in, one that can go iSelect on them.

The end user says, “I want this bit, that bit, and maybe not that bit; tell me, what are my options?”

“What do you think is going to happen?” I asked.

“Why can’t one be a good cab driver and know the ins and outs of the route at the same time?” She bypassed my question (didn’t even ask what SaaS or IaaS is).

“… Well, my dear, the reason is that the road for the developer these days is changing so rapidly that it’s a bit hard to become equally good at both in the time span of heading from A to B…”

Of course, I didn’t say that out loud. A little GPS-led navigation (with blind faith), or the idea of black-box-led development, is not that bad. I also didn’t mention that when I think of myself as a Programmer, all this actually makes me excited and shows me glimpses of new opportunities.

I didn’t say any of that out loud either. I haven’t lost my sanity, yet!

Instead, I just smiled (why ruin a developer’s dream? They are such beautiful dreams).

Building applications for your kid (Using Drupal, HTML+JS+CSS, Angular and Ionic)

“It’s not that fun…”: that’s the default dialogue of my 7-year-old whenever he starts reading the books he brings home from school. It’s not that I don’t understand the “not fun” bit; in fact, I very much do. The reading becomes more like homework (a responsibility) rather than fun time, as the content of most of the books he reads is really not in sync with what goes on inside the head of a typical Year 2 student (if you know what I mean). This becomes painfully apparent when we summarise the story once the reading bit is done (you know, that informal Q/A thing to judge whether he understood the story/narrative). Most of the time he zones out from the actual story line and injects something completely different from what the book is actually talking about. But when he does that, I can see the sparkle in his eyes; even though it has nothing to do with the current story or context, that tendency to branch out does put the fun back into reading (well, for him). So one day, just out of curiosity, during his usual reading time, I told him, “how about you tell a story?”

“About what?”, he asked.

When I said “About anything you like”,

he asked me if he could “talk about his favourite Minecraft character”.

And oh boy, that was some fun story time (yes, he loves playing Minecraft, PvZ Garden Warfare, Rayman, etc.). In the beginning it seemed nowhere near a story; it was just a 7-year-old kid excitedly mumbling about the game, the new update, and whether he could buy that new add-on, but soon he started talking about chapter 1 of the game, more like a storyteller, which to me sounded like a perfect story. I kept asking questions while he was talking, and he promptly answered them. It was almost as if he was having the same level of fun he gets when he plays the game, only he wasn’t playing. So the next day I got him a book on Minecraft (and these books are expensive), but to my surprise I found that even though he was initially happy to get something associated with Minecraft, that happy feeling evaporated pretty quickly when he realised he needed to read the book from start to finish (to be fair, the book was meant for Y4 kids). He was more interested in learning about the glitches in the game, or the hidden missions associated with the characters. In other words, the bits and pieces of the book that he loves, and not the whole book (did I mention that the book was expensive?).

At that moment, I decided to develop something that would give us (my wife and me) a very easy way to create short but targeted stories, containing enough words (the ones we want him to learn) and pictures (static or animated) to grab his attention.

So this is what I did:

  • I needed a way to author stories, which means I needed a CMS, and I chose Drupal (WP would do fine as well). Using a CMS takes away the usual headaches that come with any CRUD operation on a piece of content:
    • Authentication
    • User roles (access)
    • Version control (workbench)
    • RTE
    • Validation
    • Workflow (with notification)
    • Mass import from popular formats (e.g. csv)
    • CRUD itself
    • Headless option
  • Now that I could create stories, I needed to provide a UI to load all those stories and show the details once one of them was selected. My requirements were very simple:
    • It should run on popular devices/browsers (including iPad)
    • My kid should be able to browse the library easily
    • Once a book/story is selected, he should be able to read it like a book
    • New books should automatically appear in the library
  • To make it a bit more interesting, I thought the following features should be added as well
    • Able to flip the pages like you do while reading a book
    • Read out loud the content of the page
    • Read out loud any word that the user clicks/taps on

Let’s have a look at the end product (set the video quality to 720p; you may need to increase the volume a bit to hear the read-out-loud feature).

It’s very basic, more like a working prototype, but it does what it’s supposed to do. Most importantly, my kid loves it. And I thought, why not share this experience with you guys!

So how is it done? There are basically two parts.

For back-end

  • As mentioned before, using Drupal as the CMS
  • Created a content type called Publication
  • Exposed Publication through a WS API (using Views; I know, I should have used D8), see the consumer sketch after this list
    • For the full publication listing (/API/json/publication-listing-all)
    • For a specific publication/story/book’s cover details (/API/json/publication-details-by-id/994), here with an Id of 994
    • Finally, to get all the pages of a specific publication/story/book (/API/json/publication-pages-by-id/994), again with an Id of 994
  • Provide a mass-import option using Feeds
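
Consuming those endpoints is plain JS. A minimal sketch, assuming a made-up host name and a title field (the actual field names depend on how the Views are configured):

```js
// library.js: a minimal consumer sketch for the Views endpoints above.
const BASE = 'http://mystories.local'; // hypothetical host

// Load the full library listing.
async function loadLibrary() {
  const books = await (await fetch(`${BASE}/API/json/publication-listing-all`)).json();
  for (const book of books) {
    console.log(book.title); // field name assumed; depends on the View
  }
}

// Load all pages of one publication by Id (e.g. 994).
async function loadPages(id) {
  return (await fetch(`${BASE}/API/json/publication-pages-by-id/${id}`)).json();
}

loadLibrary();
```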

Now for front-end

  • Using Bootstrap as the front-end framework
  • Using JS for handling remote calls and UI interactions
  • Using turn.js for the book flip effect
  • Using the Speech Synthesis Web API for the read-out-loud feature (a small sketch follows the list)
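
The read-out-loud bits really are that small, thanks to the standard Speech Synthesis Web API. The sketch below assumes each word on a page is wrapped in a span with a word class (a markup convention of this example, not necessarily the app’s):

```js
// read-aloud.js: read-out-loud via the Speech Synthesis Web API.
function readAloud(text) {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 0.9; // slightly slower for a young reader
  window.speechSynthesis.speak(utterance);
}

// Read out any word the user clicks/taps on.
document.addEventListener('click', (event) => {
  if (event.target.classList.contains('word')) {
    readAloud(event.target.textContent);
  }
});
```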

I know I could have used Drupal in a headless, API-only way, but as I was using D7 and wanted the application to be very light-weight, I decided to build the front end from scratch.

Just to illustrate how easy it is to add a new book/story, let’s have a look at the following video (set the video quality to 720p; I forgot to record audio for this one).

As you can see, you can add a book/story the usual way, or you can use the Feeds importer to mass-import content.

As this is a standard JS web application, it runs very well on tablets, as expected (on second-gen iPads, or even my old Samsung 10″ Tab 3). It worked fine, but then I thought: what if we push it a bit further and make it a proper app? That way, once the content is loaded it will be cached, and the user can use it offline. With Ionic 3 and Angular 4 (with TypeScript) we can do that very easily. Our web services are already defined; all we need to do is consume them and render the pages using Ionic UI components. Once compiled (and transpiled) with the help of the Cordova CLI, we can deploy our program as a proper app (Android/iOS/Windows Phone).

Just to give you an idea, this is what the app looks like running on an Android phone (set the video quality to 720p; this is the actual rendering, recorded on my phone).

Here are some important aspects of the app.

  • Nice native feel
  • Offline caching
  • Summary icons to show whether the content/book is unlocked, plus options to add it to favourites or go for a quiz/test
  • Favourites list option
  • Remove from favourites
  • Animated images on story pages

We can probably add a few more, but these are the ones my kid loves.

So, long story short, this is what we did:

Drupal + WS API -> HTML5 + CSS3 + JS + JS libs = Web application

Drupal + WS API -> HTML5 + SCSS + TS + Angular + Ionic -> Cordova = Mobile Application

Now, how easy was that?