Developing SPA with Angular Material

Fast, convenient, tricky. These are the first three words that come to mind if someone asks how it feels to develop with Angular Material. The project’s documentation states the following right on the first page: “For developers using AngularJS, Angular Material is both a UI Component framework and a reference implementation of Google’s Material Design Specification. This project provides a set of reusable, well-tested, and accessible UI components based on Material Design.” Let’s take a close look at whether this is 100% true, based on our extensive experience developing SPAs with Angular Material here at DSR Corporation.

Fast

Well, let’s drop all these subjective metrics and talk features:

  • Angular Material is a flexbox-based framework that provides an impressive set of tools for manipulating layout. What does this give us? We can drop a huge amount of CSS that would otherwise be needed to position our DOM elements the way we want. Positioning inside a block is set with well-documented directives right in our HTML, which keeps templates easy to read (see the sketch after this list).
  • Built-in nice animated dialogs.
  • Built-in services and directives for working with font icons and SVG images, with the ability to switch between icon sets and modify icon styles quickly and painlessly.
  • Built-in toasts.
  • Mobile-friendly date picker.
  • Basic support of swipe actions.
  • Resource-friendly list that reuses DOM elements to render long scrollable lists in order to improve performance.
  • Built-in tooltips.
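
As a quick illustration of the layout directives mentioned in the first bullet above, here is a minimal sketch. The attribute names come from the Angular Material 1.x layout API; the concrete structure and content are hypothetical:

    <!-- A toolbar plus a responsive three-column row, positioned entirely with directives -->
    <div layout="column" layout-fill>
      <md-toolbar layout="row" layout-align="space-between center">
        <h2 flex>My App</h2>
        <md-button>Sign in</md-button>
      </md-toolbar>
      <div layout="row" layout-wrap>
        <div flex="33" flex-sm="100">Card 1</div>
        <div flex="33" flex-sm="100">Card 2</div>
        <div flex="33" flex-sm="100">Card 3</div>
      </div>
    </div>

No positioning CSS is involved: the row/column direction, alignment, wrapping, and responsive widths are all declared in the markup.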

Convenient

As its name suggests, Angular Material implements Google’s Material Design Specification. So if you want to follow Google’s guidelines, you’ll find that many things work right out of the box as expected. Just keep in mind that this is only one of many possible implementations. Get ready to be flexible in your design and to alter it in favor of keeping your code clean. With great power comes great responsibility: with many built-in features, directives, services, animations, and CSS rules comes hard-to-modify predefined behavior. We are not saying it’s impossible to change the way things work in Angular Material, but doing so usually takes yet another dirty hack.

After all, convenience is a very subjective thing so here are some key features that should make your life easier:

  • Adjustable autofocus for dialogs and navigation bars
  • Beautifully animated buttons
  • Custom designed checkboxes
  • Custom designed selects
  • Built-in chips
  • Built-in complex menus and navigation bars
  • Animated input containers with ngMessages support for displaying errors and a built-in character counter
  • Custom designed radio buttons
  • Built-in sliders
  • Built-in switches
  • Animated tabs with custom actions on select and deselect

Tricky

Here comes a fly in the ointment. Since Angular Material is pretty young, it has all the expected “puberty” problems. At the moment of writing this article it has 1545 open issues and 90 pending pull requests, and not without reason: while it works fairly well under Chrome, it starts showing its teeth under Firefox and constantly fails here and there under Safari, especially mobile Safari. If Mac OS is not among your target platforms, you can still keep your code more or less readable, but a cascade of hacks can bring your app to its knees if you must make it work under OS X, and even more so under mobile Safari. That’s not to say it won’t work in the end, but you will have to sacrifice some built-in features or spend hours on custom overrides, which somewhat undermines the whole idea behind using Angular Material.

To sum up all of the above, Angular Material is a promising, powerful tool that can help you a lot and drastically speed up development. Just keep its current limitations and issues in mind so you don’t end up building an unmaintainable monster.

If you would like to learn more, have a project in mind, or want to share some comments, please connect with us at contact@dsr-company.com.

Modern Media and IP Video Systems: How to Choose the Best Formats and Technologies

Today there are two standard ways of providing video content – “video on demand” and “live streaming.” “Video on demand” means that a client requests static media content from a server. “Live streaming” means that media content is generated in real time and relayed through a server to multiple clients.
In addition, media consumers use multiple types of devices for both ways of video playback:

  • Desktop browsers
  • Mobile devices
  • Smart TVs
  • Set-Top-Boxes
  • Game consoles

Since media traffic is accessed in different ways and on many types of devices, it is important to understand the formats and technologies that make all of this possible.

MPEG-DASH Revolution

MPEG-DASH (Dynamic Adaptive Streaming over HTTP) is a technology that has been evolving since 2010 and has been adopted by major hardware vendors and content providers. In 2016 it overtook Adobe Flash Player, long used for media playback in desktop browsers, since all mainstream desktop browsers now support MPEG-DASH.
The challenge remains for mobile devices and Apple TV, which still do not support this technology well and primarily use HTTP Live Streaming. Several custom MPEG-DASH implementations already exist for the main mobile platforms on the market, but it is not yet included in the native SDKs.

MP4 and WebM Formats

MPEG-DASH is format and codec agnostic, but currently the following formats and codecs are mostly used:

  • MP4 container with h.264 video and AAC audio codecs
  • WebM format with VP8/9 video codecs and Vorbis/Opus audio codecs

The h.264 codec is well supported in software and hardware and is arguably the best choice today, but it imposes royalty fees if the provider charges customers for content.
VP8/9 is royalty-free but not as widely adopted as h.264, so you should carefully check whether a particular target platform supports these codecs.

Choices for Desktop Media Systems

An HTML5 UI that can run in any mainstream browser is the most modern way to play back media content for desktop end users. The HTML5 standard provides the <video> and <audio> elements for native media playback, but they natively cover only the “video on demand” approach. MP4 is supported in all mainstream browsers, while WebM is not supported in Internet Explorer, Microsoft Edge, and Safari.

So far HTML5 doesn’t have a standardized approach for “live streaming.” Until the end of 2015, Adobe Flash technology and RTMP (Real Time Messaging Protocol) were widely used to provide live streaming. The situation has now changed significantly: MPEG-DASH is supported in all major desktop browsers via HTML5 Media Source Extensions and the dash.js JavaScript library. YouTube and Netflix have been adopting this technology for the last several years, and MPEG-DASH seems to have become the de facto standard for live streaming in desktop browsers.
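
For illustration, here is a minimal sketch of starting MPEG-DASH playback through Media Source Extensions with dash.js. The manifest URL and element ID are hypothetical, and the snippet assumes the dash.js script is already loaded on the page:

    // Attach dash.js to an existing <video id="player"> element and start playback.
    var url = 'https://example.com/stream/manifest.mpd'; // hypothetical DASH manifest
    var video = document.querySelector('#player');
    var player = dashjs.MediaPlayer().create();
    player.initialize(video, url, true); // true = autoplay; dash.js handles adaptive bitrate switching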

The important thing is that the native HTML5 “video on demand” approach doesn’t support adaptive streaming: the player cannot adapt the media bitrate to the client’s Internet bandwidth. This is why MPEG-DASH technology ends up being used for both the “video on demand” and “live streaming” approaches.

Choice for Mobile Devices

HTTP Live Streaming (HLS) still remains the best choice for both “video on demand” and “live streaming” in native applications on mobile platforms. Both Android and iOS support this technology in their native SDKs and stock browsers. HLS works with h.264/AAC codecs on iOS. Android is limited to the h.264 Baseline Level 3.0 profile, but it allows other codecs, including VP8/9.
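
As a small illustration (the element ID and stream URL are hypothetical), native HLS support can be detected and used directly through the HTML5 video element on iOS Safari and most Android stock browsers:

    var video = document.querySelector('#player');
    var hlsUrl = 'https://example.com/stream/master.m3u8'; // hypothetical HLS master playlist
    if (video.canPlayType('application/vnd.apple.mpegurl')) {
      video.src = hlsUrl; // the platform's native player handles segment loading and bitrate switching
      video.play();
    }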

Choice for Home Device Systems

Smart TVs, Set-Top-Boxes, and game consoles usually support a wide range of formats, technologies, and codecs. Although a handful of technologies cover most media traffic (over 75%), there is no single best choice for such devices, since each one usually acts as a universal media player for many formats and content providers. Any modern home media device should support the technologies covered above without limiting its wider capabilities.

Server Side Choice and Requirements

Both MPEG-DASH and HLS can use HTTP/HTTPS as transport protocol. That really simplifies the server side environment. Many HTTP servers support these formats or can be used as proxies to MPEG-DASH/HLS services.

The Wowza streaming engine, or a custom pipeline built on open-source software such as FFmpeg and MP4Box, can be used for transcoding and repackaging.

Both MPEG-DASH and HLS assume that the stream bitrate adapts to the client’s Internet bandwidth. This means the access point must be able to provide streams with different bitrates simultaneously.

“Live streaming” requires several simultaneous encoding processes on the server side, which increases server-side hardware requirements, especially the number of CPU cores. It may take several modern CPU cores to transcode a single HD stream in real time.

For the “Video-On-Demand” approach, streams with different bitrates must be prepared ahead of time and stored on the server side. This brings additional requirements for storage size, since several copies of the same original media with different bitrates must be stored.
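
As a rough sketch of that “prepared ahead of time” step (the file names, bitrate ladder, and the use of Node.js to drive FFmpeg are assumptions for illustration, not a prescription), a VOD source could be pre-encoded into several renditions like this:

    // Pre-encode one source file into several h.264/AAC renditions for adaptive streaming.
    var spawn = require('child_process').spawn;

    var renditions = [
      { name: '480p',  video: '1000k', audio: '96k'  },
      { name: '720p',  video: '2500k', audio: '128k' },
      { name: '1080p', video: '5000k', audio: '192k' }
    ];

    renditions.forEach(function (r) {
      // Assumes ffmpeg is installed; MP4Box (or similar) would then package the outputs for DASH/HLS.
      var args = ['-y', '-i', 'source.mp4',
                  '-c:v', 'libx264', '-b:v', r.video,
                  '-c:a', 'aac', '-b:a', r.audio,
                  'out_' + r.name + '.mp4'];
      spawn('ffmpeg', args, { stdio: 'inherit' })
        .on('close', function (code) { console.log(r.name + ' finished with code ' + code); });
    });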

In Conclusion

Any modern media and IP video system must take the most widely adopted and prominent technologies into account. Today these include MPEG-DASH, MP4 with h.264, and WebM with VP8/9. HLS is still strong and must be supported, but with MPEG-DASH continuing to gain ground, it may well replace HLS in a few years.

Adobe Flash Player should no longer be used as the main player by any web content provider, and it is likely that this technology will be retired in the near future.

MPEG-DASH and HLS enable high-quality media content services, but on the other hand they impose increased requirements for content processing and storage on the server side, as well as higher overall complexity of a media system.

Hybrid Mobile Apps in the Real World

Nowadays most people have heard about hybrid mobile app frameworks and their advantages. There are many of them, and each promises everything you ever dreamed of for cross-platform mobile application development. The idea is nice and simple: host a web application inside the user’s smartphone or tablet, with the ability to have a single codebase, involve web developers, reduce costs, decrease time-to-market, and so on. However, most mobile applications are still native and platform-specific. Why is that? Experienced developers (especially those who had a really hard time working with early hybrid applications) may notice that the real-life cross-platform mobile app development process is still far from perfect enough to make it the standard. The DSR Corporation team has rich experience in PhoneGap application development and knows all (or almost all) the pros and cons of this technology. In this blog we aim to summarize our experience and share the risks that should be considered when using a hybrid mobile app framework.

“Like Native” Still Doesn’t Feel Native

After your hybrid application is ready and deployed, I bet the first complaint you will get is about performance. The modern-day user has zero tolerance for even a slight delay, especially if he or she is expecting to be entertained. Just try to scroll a list in a native iOS application and then in a hybrid one: you probably won’t say the hybrid lags, but it feels different, and users notice that.

Additionally, if you want to use the same codebase for all platforms, then your application will look the same on all platforms. Sometimes that is good and desirable. Even if it is not, your application will still work fine. But as you may know, the UX/UI guidelines are different for each platform, and each user base has specific expectations about how apps work on their respective platforms. For example, an iOS user expects a “Back” button everywhere, while Android users may not find the search feature if you implement it iOS-style. So when you implement the same UI everywhere, you are either making a compromise or spending extra effort to make it different.

You Can’t Get Away from Debug

If it works on one platform, that does not mean it will work on another. Each hybrid app has its native part, and framework development teams work hard to make them behave the same way on all platforms so you can relax and concentrate on HTML, CSS, and JavaScript. However, even if the framework you picked is “perfect,” each platform still comes with its own browser engine. Moreover, if you extend the native part, for example by writing your own PhoneGap plugin, you will have to carefully port it to each platform. The situation gets more complicated on low-cost or old devices: each one of them can have its own surprises. In this case you may want to look at the Crosswalk pluggable webview (https://crosswalk-project.org/) to make your app easier to maintain.
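
To make the “native part” concrete, here is a minimal sketch of the JavaScript side of a hypothetical PhoneGap/Cordova plugin call. The service and action names are invented, and the matching native implementation would have to be written and debugged separately for each platform, which is exactly where the platform-specific surprises appear:

    // JavaScript half of a hypothetical "BatteryTemperature" plugin.
    // cordova.exec bridges into platform-specific native code registered under the same service name.
    function getBatteryTemperature(onSuccess, onError) {
      cordova.exec(
        onSuccess,              // called by the native side with the result
        onError,                // called by the native side on failure
        'BatteryTemperature',   // hypothetical native service name
        'read',                 // hypothetical action implemented natively per platform
        []                      // no arguments
      );
    }

    getBatteryTemperature(
      function (celsius) { console.log('Battery temperature: ' + celsius); },
      function (err) { console.error('Native call failed: ' + err); }
    );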

As a result, this is actually a normal case for multi-platform development when “write once, run everywhere” turns into “write once, debug everywhere.”

Platform-specific Nuances

In a perfect world, a web developer could build a cross-platform mobile app without learning anything else. In reality, when using a hybrid framework, please make sure the team is armed with at least the following knowledge:

  • Environment configuration. At the very beginning, you can rely on a cloud build service. But sooner or later, even if you work on something simple, you will need to debug your application to make sure there will be no surprises the day before delivery.
  • Native support. It would be nice to rely on an experienced mobile application developer on your team. And it will be 100% necessary if your hybrid framework does not provide something you need.
  • Offline mode. Make sure you are developing a mobile application, not a mobile website. It will be nice if it works offline or at least provides a clear message that an Internet connection is required.
  • Guidelines. The users will very much appreciate if you meet their expectations. Everyone likes something he or she is used to and that feels natural with how they use their device.
  • Interact with others. Make sure you know and interact with typical platform applications that are used every day. Building good neighbor relationships is a good habit.
  • Deployment and approval. It can be a real pain for those who are doing it for the first time. You may need to read vendor requirements carefully even before you start the development: some rules can change your plans.

Keep it Simple

Please don’t expect too much from hybrids. If you need flawless quality, great performance, or an exquisite look and feel, hybrid is probably not a good choice. Note that hybrid apps are fragile: the more complex your architecture, the more effort you need to maintain it, especially when it comes to the interaction between the HTML/JavaScript part and the native part.

It’s Alive!

Considering the fact that you are still able to use native tools and controls you may think about combining web controls with native ones. Be careful not to create a Frankenstein: the application complexity can rise so significantly that it will become a nightmare to debug and maintain. First, the native control you used will need to be considered on other platforms (there is no guarantee that it will work the same way). Second, the integration part (between HTML/JS and native) is the “richest” source of bugs: the more complex the design is, the more “fun” you will have. Third, thinking about using native controls is a good signal that you need a native application. Consider your options and make the right choice for your app.

The Apple Dependency

Another risk that you may need to consider is a high dependency on the App Store Review Guidelines. The App Store is dominating the mobile apps market, so Apple can and does dictate its rules. The Review Guidelines change quite often and, although unlikely, it’s possible that hybrid apps can get banned from the App Store at some point in the future.

Now that you are aware of the risks, you may wonder whether there are any reasons left to build hybrid applications. There are many!

Low Cost and Quick Time to Market

If you have a limited budget and need a mobile application, using a hybrid framework may be a solution. Easier UI development will give you a cost advantage even if you are planning iOS as your only platform. Make sure to keep it simple and that your single codebase can be easily ported to other platforms without significant changes. At the same time, reducing your effort can save you precious time, so you can deploy early and have the app ready before your competitors.

Startups and Exploration

If you are planning to develop a serious application that requires a lot of effort for UX/UI you may consider prototyping it first. It will allow you to have better understanding of all application usage aspects and resolve problems early. Certainly, your prototype can be a quickly developed hybrid application you can use to get early adopter feedback and demo the concept.

Web Development Forces

If you or your team are good web developers, then you can apply that experience to mobile apps as well. Choose the hybrid mobile app UI framework you like most and do the main implementation using your favorite tools. In most cases you will not need to learn a new programming language. At the same time, you will be able to target additional platforms while building on your web development experience.

Better Than a Website

If you are thinking about developing a mobile website instead of a mobile app, then you should consider the serious advantages the mobile apps have:

  • Sales and distribution. Everyone knows about platform specific application stores that help users find what they want and allow developers to sell and distribute their apps with ease.
  • Usability. Mobile apps are optimized for use on mobile devices, and users expect at least the same level of usability. It is difficult for a browser-only application to compete with a mobile app here.
  • Works offline. Your users can work with the app even if the Internet connection is lost.
  • Using device capabilities. In contrast to websites, mobile apps can utilize a full set of device capabilities.
  • Popularity. Today it seems like a good habit to create a mobile app to meet user expectations.

Although we at DSR prefer to build native mobile apps because of their performance, seamless user interface, and best user experience, we have rich experience building cross-platform mobile apps as well. The latter can give you a serious advantage, but only if the related risks are considered. This is not a silver bullet, but it can be a powerful weapon on your side, if your project meets the parameters described above.

If you would like to learn more, have a project in mind, or want to share some comments, please connect with us at contact@dsr-company.com.

The Secure Remote Access Challenge for IoT Cloud

The Internet of Things (IoT) often brings us convenience, economy, fun, and security, but it’s also a source of numerous challenges for developers, installers, and maintainers. In this article, we are talking about one facet of the global IoT challenge – secure remote access.

Every small piece of a Smart Home, be it a Thermostat, a Security Sensor, or a Light bulb, has direct, or more often indirect, access to the Internet. Local or near-field security is a very important topic, but it cannot compare in importance with securing access to the cloud services responsible for configuration, notifications, alarms, and everything else that makes our homes smart. Personal computers, smartphones, printers, and NAS devices have had network connectivity for a very long time, but we should not forget that compromising one of the small home devices mentioned above gives an attacker some control over the physical world, which is a distinctly different type of risk than the one associated with a personal computer.

For example, imagine an attacker has access to notifications from the home’s Thermostat. He can’t control the Thermostat, but he can see its current mode and temperature. Using this seemingly harmless data, he not only violates privacy in the abstract, but most likely also learns the occupants’ schedule, including whether someone is home at that particular moment.

Recent research published by Symantec shows that the following vulnerabilities are common to almost all Smart Home solutions.

While passwords, encryption, account enumeration, and supply chain attacks are more or less obvious and are usually related to the user experience or the corresponding standards, attacks and issues around remote access security (including web vulnerabilities, man-in-the-middle attacks, and firmware tampering) should be mitigated during design and development.

So yes – it’s recommended to have secure access from Smart Home devices or gateways. And of course there are dozens of solutions that are suitable and secure, at least at the current technical level. However, sometimes even a security professional asking “what to secure?” forgets about the “when.”

What percent of devices in the field are manufactured inside a vendor’s own facilities and prototyping factories? It’s hard to know the exact answer without using floating-point operations. And even the best scheme following all standards and guidelines can be compromised during manufacturing. So here’s where the challenge becomes really intriguing.

This leads to the following requirements:

  • Server side validation (i.e., the server must be sure that the client is an approved device).
  • Client side validation (i.e., the client must be sure it connects to the right server).
  • Client side security materials should not be accessible by the manufacturer.

With server side validation, everything is more or less standardized. The only thing that needs to be added to the common pattern is unique security materials for each client, used for client identification.

From the client side the solution is trickier: tens of thousands of devices may be sitting in sleep mode in a warehouse somewhere when it is discovered that the server is compromised, and consequently they can’t be reprogrammed. This calls for an additional server-validation service. It can be, for example, a dedicated OCSP server or a custom service with only one function – to inform the device that the server’s security materials have been compromised.
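
As a minimal sketch of such a single-function validation service seen from the device side (the endpoint, certificate paths, and response format are all assumptions, not part of any standard), a device could refuse to talk to the main server whenever the validation service reports its credentials as compromised:

    // Device-side check against a hypothetical server-validation endpoint before connecting.
    var https = require('https');
    var fs = require('fs');

    var options = {
      hostname: 'validation.example.com',              // hypothetical dedicated validation service
      path: '/server-status?id=main-api',              // hypothetical: asks about the main server's credentials
      ca: fs.readFileSync('/etc/device/root-ca.pem'),  // root CA programmed into the device
      cert: fs.readFileSync('/etc/device/client.pem'), // short-lived client materials for this service
      key: fs.readFileSync('/etc/device/client.key')
    };

    https.get(options, function (res) {
      var body = '';
      res.on('data', function (chunk) { body += chunk; });
      res.on('end', function () {
        if (body.trim() === 'ok') {
          connectToMainServer();   // proceed with the normal, mutually authenticated connection
        } else {
          enterSafeMode();         // server materials reported as compromised: do not connect
        }
      });
    }).on('error', function () {
      enterSafeMode();             // policy choice for this sketch: fail closed if the service is unreachable
    });

    function connectToMainServer() { /* application-specific */ }
    function enterSafeMode() { /* application-specific */ }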

When talking about compromising during manufacturing, there’s another well known, but not so widely used option – updating security materials when the device is installed. It may be manual activation via the web or just an update on first connection.

In summary:

  • All clients should have pre-programmed security materials containing a unique ID for each client; these materials should be updated as soon as the device is installed.
  • The server should have a validation scheme for each client. Something as simple as a whitelist is more than enough.
  • A separate validation service should be implemented to allow clients to at least detect that the server has been compromised.
    • Note that for better security, it may be reasonable to set the lifetime of the security materials used for accessing the validation service to a reasonably short value – for example, 20-30 days instead of the years usually used for the main services.

These 3 simple principles make the entire system much more secure and, as a bonus, this scheme can be implemented using open-source software as described below.

Sample Security Solution
(Diagram: sample security scheme showing the Root CA, the Manufacturer acting as an intermediate CA, the OpenVPN/Application server, the OCSP responder, and field devices, as described below.)

Note that the scheme above is just a sample solution; the services can be replaced with some custom implementation or appropriate analogs.

  • Root CA – In Public Key Infrastructure (PKI) acts as Root Certificate Authority – it signs certificates for Manufacturer, OpenVPN server, and OCSP responder. In addition, you should maintain the list of compromised and expired server certificates as part of the Root infrastructure Certificate Revocation List (CRL).
  • openvpn-server.com – machine (or number of machines) that runs OpenVPN server and Application Server.
    • OpenVPN Server handles VPN connections from devices. Optionally, it can check whether the device ID extracted from the certificate is listed in the known device list. The device list is provided by the Manufacturer and contains the IDs of issued devices. This list can be used to control the number of devices issued by the Manufacturer.
    • Note: the server always “knows” whether the certificate was issued by the Manufacturer or by the Root CA, and it can replace the certificate on the device after the first successful connection.
  • Manufacturer – a 3rd party in the PKI acting as an intermediate Certificate Authority: it issues certificates for devices. In addition, the Manufacturer should maintain the list of IDs of all issued devices and provide this list back to the server side.
  • Field device – runs various applications. An application sends the gathered data to the Data Server by performing the following steps (sketched in the example after this list):
    • establishes tunnel to OpenVPN server using OpenVPN client
    • checks (using request to OCSP server) that OpenVPN server certificate was not revoked
    • sends data using VPN tunnel to Data Server
    • closes VPN tunnel
  • ocsp-server.com – Instance of OCSP Server.
  • OCSP Server – Online Certificate Status Protocol responder. This is the special service that can be used to check if the OpenVPN server certificate was revoked.
    • Note that the OCSP certificate is as important as the Root CA certificate, since it can be used to block all VPN connections. So it is a good idea to run the OCSP service on a separate machine where no additional services are running.
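
The field-device sequence above can be illustrated with a short sketch. The file names, host names, and the choice of driving the openvpn and openssl command-line tools from Node.js are assumptions for illustration only:

    // Rough sketch of the field-device flow: check the server certificate via OCSP,
    // bring the VPN tunnel up, send data, then tear the tunnel down.
    var execFile = require('child_process').execFile;

    // 1. Ask the OCSP responder whether the OpenVPN server certificate has been revoked.
    execFile('openssl', ['ocsp',
      '-issuer', '/etc/device/root-ca.pem',
      '-cert', '/etc/device/openvpn-server.pem',
      '-url', 'http://ocsp-server.com',
      '-CAfile', '/etc/device/root-ca.pem'],
      function (err, stdout) {
        if (err || stdout.indexOf(': good') === -1) {
          console.error('Server certificate not confirmed as good - aborting');
          return;
        }
        // 2. Establish the tunnel (authentication details and routes live in client.ovpn).
        var vpn = execFile('openvpn', ['--config', '/etc/device/client.ovpn']);
        // 3. ...send the gathered data to the Data Server through the tunnel here...
        // 4. Close the tunnel when the upload is done (placeholder timing below).
        setTimeout(function () { vpn.kill(); }, 60000);
      });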

Why You Should Consider Node.JS as a Backend Option for Your Project

I. HISTORY OVERVIEW

In the early 2000s, when the web was small, servers were slow and client machines were even slower, developers faced the C10K problem. The problem was about concurrently handling 10,000 client connections on a single server machine. As a solution, multi-process and multi-threaded architectures (a new process or thread for each request) became very popular in mainstream software platforms for web development.

But the web continued to grow (and it still does); the C10K goal became achievable on most software platforms and frameworks, and the community set the next goal – the C10M problem. As you might have guessed, it’s about dealing with 10,000,000 concurrent client connections, which is a tremendous load.

The C10M problem is still relevant today, and developers need new solutions to reach much higher load requirements. Most modern web apps include a RESTful backend along with web and mobile clients that consume the backend API, which increases the load that the backend server must be able to handle.

Async I/O based platforms were created to help developers reach the goal. In this post we’ll talk specifically about Node.js as our preferred platform for high load systems development.

II. NODE.JS OVERVIEW

Node.js was initially created in 2009 by Ryan Dahl and other developers working at Joyent (the main contributor to Node.js). The idea was to use the extremely fast Google V8 JavaScript engine and implement system libraries providing common APIs that are absent in browser environments (since browsers sandbox JS code), such as file manipulation, HTTP client/server libraries, and so on.

Because of the async nature of JS (which executes in a single thread and does not support multithreading at all), all Node.js system libraries provide evented, asynchronous APIs for I/O operations, such as reading a file or sending an HTTP request.

So Node.js itself might be described as an asynchronous, event-driven platform. Code written by developers is executed in a single thread and, speaking of web backend development, at any given moment only a single client request is being processed while all others are waiting. However, due to the async nature of Node.js, all potentially blocking calls, such as SQL query execution, have event-based APIs, and the runtime switches from the current request to the next pending operation (all of this is handled by the event loop).
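
To make the event-loop model concrete, here is a minimal sketch using only core modules (the file name is arbitrary): while one request waits for the disk, the same single thread keeps accepting and serving other requests.

    var http = require('http');
    var fs = require('fs');

    http.createServer(function (req, res) {
      // The read is handed off to the OS; the JS thread is NOT blocked while waiting.
      fs.readFile('./data.json', 'utf8', function (err, data) {
        if (err) {
          res.writeHead(500);
          res.end('read failed');
          return;
        }
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(data); // runs later, when the event loop picks up the completed I/O
      });
      // Control returns here immediately, so the server can start processing the next request.
    }).listen(3000);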


III. COMPARISON WITH “CLASSICAL” SOLUTIONS

Let’s look at “classical,” thread-per-request solutions, such as Java servlet containers (Tomcat, Jetty, etc.) or the Apache web server. By default, the whole thread is blocked on I/O operations. Thus, the maximum load a single web server instance can handle is bounded by the maximum number of threads the server can maintain.

Apart from the fact that creating a new thread is relatively expensive, threads have quite a heavy footprint themselves. For example, each Java thread requires 1024 KB of memory for its stack by default on a 64-bit JVM, so 10K threads require about 10 GB of RAM just for thread stacks.

So building high-load systems with such software platforms is still doable, but it requires many more server instances to handle the same load.

IV. OUR EXPERIENCE WITH NODE.JS

At DSR we have implemented several projects with Node.js-based backends and have been very pleased with the scalability and performance of the platform.

The strength of Node.js is its async, non-blocking I/O nature. Modern RESTful backends mostly perform a lot of I/O operations without heavy computations, and Node.js shines here. While reaching the C10K goal requires real effort from developers on multi-threaded platforms, for Node.js it is effectively the baseline.

Our experience confirms that Node.js is a stable and developer-friendly platform. Since Node.js 4.0 there are LTS releases, which are stable and supported for a long period of time. As developers, we find the built-in platform tools, like npm, very handy, and, as for the code, promises combined with generators are simply awesome (see the sketch below).
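
For readers unfamiliar with the “promises plus generators” style mentioned above, here is a minimal, dependency-free sketch of the idea (file names are arbitrary; libraries such as co provide a production-grade version of this runner):

    // A tiny runner that drives a generator yielding promises, so async code reads top to bottom.
    function run(genFn) {
      var gen = genFn();
      function step(value) {
        var result = gen.next(value);
        if (result.done) return Promise.resolve(result.value);
        return Promise.resolve(result.value).then(step);
      }
      return step();
    }

    var fs = require('fs');
    function readFile(path) {
      return new Promise(function (resolve, reject) {
        fs.readFile(path, 'utf8', function (err, data) { return err ? reject(err) : resolve(data); });
      });
    }

    run(function* () {
      // Both reads are non-blocking, yet the flow reads synchronously.
      var config = yield readFile('./config.json');
      var template = yield readFile('./template.html');
      console.log(config.length, template.length);
    }).catch(function (err) { console.error(err); });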

Of course, there are some negative points, like the small number of production-ready 3rd-party libraries and frameworks. But the platform is rather young, and the Node.js ecosystem is rapidly growing and maturing. The growth of the npm package count illustrates this well enough (see the chart below).

(Chart: growth of the npm package count over time.)

We’re actively participating in the Node.js platform community and are looking forward to building more great, high load systems using it.

ZigBee 3.0 Evolution of Things

Another CES has come and gone. The wheels have touched down, and you are likely back home. You and your team have refueled with a few well-deserved, solid nights of sleep, and it’s now time to reflect on what made CES 2016 special. Let’s highlight one of the exciting moments: the ZigBee Alliance announcement of ZigBee 3.0.

With ZigBee 3.0, there is no reinvention of the standard, no sudden updates, and no unpredictable changes – we are looking at the refinement of a proven technology. To borrow from natural selection, we are watching the substantial evolution and adaptation of ZigBee, with the IoT market confirming the technology’s maturity. By observing changes in wireless technologies and the connected-IT business environment, we can track how the ZigBee standard reacts to them.

The key features of ZigBee 3.0 include dramatically improved interoperability and strengthened security. DSR has been continuously involved in implementing ZigBee Pro since the 2006 standard, and we can confirm that the new features in ZigBee 3.0 are a real game changer, especially the convergence of the application profiles into a unified base device implementation. At first glance, this change looks like the kind of revolution in ZigBee that casts doubt on the previous specification. We do not view it this way.

Earlier, when the profiles were developed, the market was a union of isolated areas. Which areas, you might ask? Well, let me challenge you to quickly recite the ZigBee profile names. If you’re like us, you don’t like separate smart home, light control, or energy measurement functions. We want the Internet of Things and we now have extremely inexpensive, more powerful microcontrollers to build it with. We don’t need profiles anymore. We need the unified implementation enabled by ZigBee 3.0.

A structural consequence of the profiles evolving into the base device approach is the strengthened role of the cluster as a unified application building block (clusters were developed for this, of course). The ZigBee Alliance goes further and standardizes device types. For us, this approach quickly becomes rudimentary, because all the tools are already in place for dynamic device discovery. We are talking about EZ-mode commissioning, which can now discover all the features of an added device right at the commissioning step. After finding and binding, the application has full details about the joined device and the bound clusters, so the device type information can only be used for predictions. What we would like to see instead of standardized devices is a strict “survival recommendation” list for different groups of devices – for example, recommendations for implementing optional attributes/commands or, more specifically, having the Poll Control cluster on sleepy end devices (see our previous blog post).

Overall, the transformation of the profiles multiplies the core, indisputable advantage of ZigBee – mesh networking. Devices that previously would have joined different networks can now truly co-exist. The new standard also allows ZigBee to keep its status as one of the most energy-efficient choices. Moreover, with the Green Power feature in ZigBee 3.0, even battery-less devices can operate in the network.

In conclusion, on top of all the benefits of ZigBee 3.0, painless backward compatibility and the OTA Upgrade feature guarantee that neither users nor developers will have trouble switching to the new standard or supporting old devices. Best of all, now only the ZigBee mark on the device’s box matters: not the profile, not even ZigBee PRO versus ZigBee 3.0. How often do you care whether the USB device you buy is 1.1, 2.0, or 3.0? It is the same here.

What do we have as a result? A self-healing mesh network of green, low-power devices with a unified, easy installation mechanism, a growing community, and continuous evolution. Isn’t that a synonym for IoT?

The Real Reasons Behind Most ZigBee Interoperability Problems

Interoperability is a buzzword we hear often when talking about wireless protocols, including ZigBee. Being an already trusted but still young standard, ZigBee itself can raise many questions when reading the official documentation. However, that is not the topic of this blog. With over a decade of experience in wireless communications software development and 7 years working closely with ZigBee, we have seen many cases where, although the specification gives an adequate description, developers reinvent the wheel. Our extensive experience integrating and working with a large number of sensors from different manufacturers provided the valuable insight we are sharing in this blog.

The field with the most room for creativity, and hence for mistakes, is the application layer, where profiles join the game.

Let us start with one simple flag – the “manufacturer specific” flag in the ZCL header, whose invalid usage may cause a variety of problems. The right way to use it is to extend the functionality of ZCL (HA) by adding attributes or whole clusters that are not provided officially. For example, we cannot guess why the “Temperature Measurement” cluster has a “Tolerance” attribute while “Humidity Measurement” does not; the point is that if you want a “Tolerance” attribute in your humidity sensor, you need to make it a manufacturer-specific attribute. Or, as another example, let’s say you are working on a ZigBee-based pet tracking system. We promise there is no “Animal Tracker” cluster in any specification. You will need to implement it yourself and, yes, it will be manufacturer-specific.

The common mistake with this flag is marking general attributes and commands with it. We faced this while working with IAS sensors, and it made us wonder why the standard enrollment procedure would need any manufacturer code. Do developers really consider their manufacturer code safer from intruders than the entire ZigBee security system?

Anyway, this can be debugged fairly easily, because the only thing we need to know in this case is the manufacturer code. There is a way to obtain it using only ZigBee tools: the code is placed in the node descriptor. If the node descriptor does not help, the code can be requested from the manufacturer. And when there are no contacts, a ZigBee sniffer can help too. If there is a coordinator that the device successfully enrolls with, then by catching the proper enrollment procedure with the sniffer we will get the code. Another way is to write any attribute in the cluster of interest and likely get a response carrying the code. Moreover, configuring and binding that cluster may cause some manufacturer-specific attribute to be reported along with the code. So, the key is just to be patient.
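
For illustration, here is a small sketch of pulling the manufacturer code out of a raw ZCL frame captured with a sniffer. The byte layout follows the ZCL header definition (frame control, optional 2-byte manufacturer code, sequence number, command ID); the sample bytes are invented:

    // Parse a raw ZCL frame (as a Node.js Buffer) and report the manufacturer code, if any.
    function parseZclHeader(buf) {
      var frameControl = buf.readUInt8(0);
      var manufacturerSpecific = (frameControl & 0x04) !== 0; // bit 2 of the frame control field
      var offset = 1;
      var manufacturerCode = null;
      if (manufacturerSpecific) {
        manufacturerCode = buf.readUInt16LE(offset); // 2 bytes, little-endian
        offset += 2;
      }
      return {
        manufacturerSpecific: manufacturerSpecific,
        manufacturerCode: manufacturerCode,
        sequenceNumber: buf.readUInt8(offset),
        commandId: buf.readUInt8(offset + 1)
      };
    }

    // Invented example frame: cluster-specific command with the manufacturer-specific bit set.
    var frame = Buffer.from([0x05, 0x34, 0x12, 0x2a, 0x00]);
    console.log(parseZclHeader(frame));
    // -> { manufacturerSpecific: true, manufacturerCode: 4660, sequenceNumber: 42, commandId: 0 }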

This mistake gets even worse when the device confuses ZDP discovery tools: for example, the cluster is not returned in a simple or match descriptor response, yet some commands are supported and they are manufacturer-specific. In this case, discovery does not work, and you will need either a technical contact or a lot of time to experiment.

In this case, all we know is that the device in our hands is a ZigBee device and what it is used for, so we can predict its cluster list. The only thing we can do without the manufacturer’s help is send commands to the predicted clusters and wait for a response with some status.

The next issue has to do with misunderstanding attribute semantics. When the number of attributes exceeds two or three and the cluster logic becomes complicated, the meaning of an existing attribute is easy to misread. Just imagine trying to set the temperature on a thermostat while the room stays too cold or too hot. Take this HVAC system and try to guess which setpoint the “Setpoint Raise/Lower” command operates on. It depends on the command’s mode as well as the current system mode. But some developers prefer a single “clear” attribute, which of course cuts out the existing logic. In such cases, misunderstanding the specification can even lead to attribute duplication.

One of the last common problems has to do with a very useful HA extension – Poll Control. Even though it is strongly recommended, it is often ignored. Real problems arise when the device has its own long poll interval that is much longer than the default one. If we leave the situation as is, many packets destined for such a sleepy device will surely be lost. Therefore, we should increase the timeout for deleting expired indirect packets. This comes with a risk: if the timeout is too high, the queue will most likely overflow. That is why, when increasing the indirect queue timeout, the updated coordinator should be tested in a large network with many sleepy devices connected.

To close, we want to add a few words about a mistake that will not break interoperability but can be frustrating, and is easily avoided. Unfortunately, as of today we do not have as many reportable attributes as we might want, and everybody who faces this problem solves it in his or her own way. We have seen “Write Attributes” commands sent to the client cluster, and even reports that were never configured. It is the only problem described here that can be attributed to a lack of functionality in the official specifications. We are sure this will be addressed in one of the next updates. But we are equally sure that devices that skip the configure/bind logic before sending reports will not disappear for many years.

We hope this blog gave enough examples to show that most interoperability problems at the application layer come from not fully understanding the ZigBee Alliance documents. As ZigBee technology grows and the number of well-designed devices increases, such misunderstandings can make a product less competitive and harder to support. It is key to take the time to understand and follow the standard in order to avoid these issues and ensure the success of your products.

Latest Custom Software Applications for Media & Entertainment from DSR

As part of our blog, we like to share our recent experience in various industries. Below are two projects that we have worked on in the Media and Entertainment industry.

SDI Graphics Insertion

DSR recently worked on a project whose purpose was to combine OpenGL application graphics output with 3D video content. High-definition 3D video content was provided in real time as two video streams of unpacked video frames via Serial Digital Interface (SDI). OpenGL graphics were generated on the fly, with the current OpenGL frame corresponding to the current video frame of the 3D content. The output of the combined content was an SDI stream with the same parameters as the input.

One of the project’s requirements was to have no more than a 1-frame difference between the SDI input and output streams, and no more than a 2-frame difference between the OpenGL output and the 3D content.

DSR developed a library that links with the OpenGL library, takes the OpenGL output, and combines it with the SDI stream in real time. An AJA Corvid44 card was used for the SDI functionality. Because this card has a powerful mixer for video content with an alpha channel, we were able to use hardware blending that consumed neither CPU nor GPU resources for that operation.

As the project result, DSR delivered a library with a convenient API, a non-blocking architecture, and the required frame differences between input and output. Integrating the library did not require any changes to the OpenGL application architecture or drawing code; only slight OpenGL configuration tweaks were needed to let the library receive content in the format it required.

Automated Datascraping

Another recent DSR project required automating the analysis of online stores’ TV content for presentation and cost validity. All analysis data, including screenshots of the web page for a particular TV show, had to be inserted into a database for later review by an operator via an already existing system UI (where all analysis work had previously been performed manually).

For this project DSR proposed Selenium, a technology that allows a web browser to run under program control. With it, a software engineer can emulate searching for a TV show, analyze its web page, and access the page’s document object model from code.

Such an approach scales by running several instances of the Selenium-driven analysis script, which reduces the total analysis time when many TV shows and web stores must be processed.
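
A minimal sketch of such a script using the selenium-webdriver package for Node.js (the store URL, CSS selectors, and the database step are hypothetical):

    var webdriver = require('selenium-webdriver');
    var By = webdriver.By;

    var driver = new webdriver.Builder().forBrowser('chrome').build();

    driver.get('https://store.example.com/search?q=Some+TV+Show')          // hypothetical store search page
      .then(function () { return driver.findElement(By.css('.show-title')).getText(); })
      .then(function (title) {
        console.log('Found listing:', title);
        return driver.takeScreenshot();                                     // base64-encoded PNG of the page
      })
      .then(function (screenshotBase64) {
        // ...insert the title, price, and screenshot into the review database here...
      })
      .then(function () { return driver.quit(); })
      .catch(function (err) { console.error(err); return driver.quit(); });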

If any of this experience is interesting to you, or if you have any questions, connect with us at contact@dsr-company.com.

Java Enterprise Development – the Technological Journey

At DSR, our expertise with Java Enterprise technologies, Java application servers, and servlets containers is often called upon by a wide array of software development projects. Our trials and tribulations in software development are many, but today we have adopted the Spring Application Framework as the DSR standard for enterprise level applications.

Let me explain why.

In the Beginning, there was Glassfish v 2.0 with SOA and ESB

In 2008, DSR was asked by the digital media industry to create a distributed solution that allowed users to work simultaneously with video editing software tools like Final Cut Pro, and a digital media archive that housed a vast amount of content.

To develop the solution, we started with Glassfish v2.0, building a Service-Oriented Architecture (SOA) with the Enterprise Service Bus (ESB) and Business Process Execution Language (BPEL). DSR released several product versions and successfully supported consumers through 2012.

From an engineering point of view, we were satisfied with the technology stack but found the configuration too complex, relying on huge XML documents.

From a project management point of view, I believe we could have chosen better after reviewing the cost/efficiency ratio. In hindsight, we realized the project objectives could have been met with less engineering effort had a less complex stack been chosen.

Just a Few Years Ago, There was Spring, GWT, and Tomcat

In 2011, DSR was engaged to create a web-based service with stock exchange logic in the background. Taking into account our earlier learning experience with XML documents, we looked for a powerful, less complicated Java stack that allowed us to build a modern and scalable web-application.

Several server-centric technologies were tried, including JSF (JavaServer Faces), which is provided as part of a Java Enterprise application server. However, we soon realized this approach didn’t meet the project’s needs for a good user experience and scalability. We quickly moved to a client-centric approach and determined that GWT (Google Web Toolkit) showed the most promise in our tests.

For the back-end, we had to integrate with a relational database (MySQL or Oracle) and provide a solid infrastructure to manage the solution. After extensive research, we chose the Spring Framework in conjunction with Hibernate, and Tomcat as the servlet container.

From both an engineering and a project management standpoint, we were satisfied with the technology stack and found Spring Framework to be an integration solution that embeds the best parts of Java Enterprise libraries in tandem with a high degree of flexibility to add additional components.

Today, Evolution to Spring and Jetty

In 2012, DSR participated in developing an aircraft engineering configuration software used by workgroups. The engagement had a huge domain model (from a device down to a network package), concurrent versioning functionality with branches, dynamic ACL and rich client side UI to create and manage an Aircraft Engineering configuration.

Initially, Glassfish v3 with EJB 3 was chosen for the project. The Eclipse Modeling Framework was selected to handle the domain model, while the Eclipse Rich Client Platform provided the client-side application. We created a custom JPA persistence provider implementation allowing the EMF-based model to be persisted through Hibernate. We also used a continuous integration approach based on a Jenkins server.

For integration testing, we selected Arquillian technology and after six months of active development, we faced a speed issue in which our integration tests simply took too much time to execute. With a team of 10 engineers, we had several hundred integration tests with 4-hour run cycles that completely blocked our continuous integration approach. In short, our continuous integration server tested the project slower than it was developed.

While investigating solutions (including embedded Glassfish) we found no viable answer there. We then found the issue could be solved by migrating to Spring and employing the standard Spring testing approach. Fortunately, EJB 3 is very similar to Spring (in fact, EJB 3 shows a clear Spring influence), and it took only about 2 single-engineer work weeks to port the solution to Spring and its test approach.

Glassfish v3.0 was also replaced with Jetty since we didn’t need a Java EE server to run our server application despite us still using Java EE components. After migration, we continued to develop the solution for almost 2 years, and after successful acceptance procedures, passed the results to the customer.

To conclude, I’d say that with several years of experience using Java Enterprise technologies, we have proven that the Spring Framework is a very good choice in engagements where using the Java EE platform is not a strict requirement. With Spring, we have satisfied all engineering and project needs faced thus far. Although Java Enterprise Edition provides good standardization across the most popular platforms (we have used and tested the Glassfish and JBoss application servers), we found that Spring allowed us to deliver best-in-class results.

Spring has become our recommendation of choice for Java Enterprise-level solutions, although we remain open to supporting our partners’ needs and using a Java EE platform where Spring cannot be used.

Which Big Data technology stack is right for your business?

If data analysis is one of the core features of your product, then you probably already know that choosing a data storage and processing solution requires careful consideration. Let’s discuss the pros and cons of the most popular choices: Redshift/EMR, DynamoDB + EMR, AWS RDS for PGSQL, and Cassandra + Spark.

Managed Amazon Redshift/EMR

Pro – It’s fully-managed by Amazon with no need to hire support staff for maintenance.

Pro – It’s scalable to petabyte-size with very few mouse clicks.
Pro – Redshift is SQL-compatible, so you can use external BI tools to analyze data.

Pro – Redshift is quite fast and performant for its price on typical BI queries.
Con – Redshift’s SQL is the only way to structure/analyze data inside Redshift. It may be easier for simple tasks, but to do complex tasks like social network analysis or text mining (or even running custom AWS EMR tasks) you have to manually export all data to external storage (to S3 for example). You then run all your external analytics tasks, and load results back to Redshift. The amount of manual work will only grow with time ultimately making the use of Redshift an obstacle.
Con – Redshift’s SQL dialect is also very limited (as a tradeoff for its performance); the main drawbacks are missing secondary index support, no full-text search, and no unstructured JSON data support. It is usually fine for structured, pre-cleaned data, but it is really hard to store and analyze semi-structured data there (like data from social networks or text from web pages).
Con – EMR has very weak integration with Redshift: you have to export/import all data through S3.
Con – To write analytical EMR jobs, you have to hire people with pricey Big Data/Hadoop competence.

Managed Amazon DynamoDB + EMR

PRO – It’s fully-managed by Amazon with no need to hire support staff for maintenance.
PRO – It’s scalable to petabyte-size with very few clicks of the mouse.
CON – Pricing is opaque, and it may be rather costly to run analytical workloads (with full-table scans, as for text mining) on large datasets.
CON – DynamoDB is a key-value NoSQL store. For most analytical queries, you have to use EMR tools like Hive, which is rather slow, taking minutes for simple queries that typically execute almost instantly on Redshift/RDS.
CON – DynamoDB is a closed technology that is unpopular in the Big Data community (mostly because of its pricing). We have also noticed difficulty finding people with the required competences to extend the system later.

Custom ‘light’ solution with AWS RDS for PGSQL

PRO – PostgreSQL is easily deployable anywhere, has a very large community, and there are a lot of people with the required competence. You can use either the hosted RDS version or install your own on EC2 – it does not require any hardcore maintenance (unlike your own custom Hadoop cluster) and just works.
PRO – RDS PostgreSQL supports querying unstructured JSON data (so you can store social network data in a more natural way than in Redshift), full-text search (so you can query a user’s friends for custom keywords), and rich data types (like arrays, which are very useful for storing social graph data); see the sketch after this list.
PRO – Has full-featured unrestricted SQL support for your analytical needs and external BI tools.
CON – PostgreSQL is not “big data” friendly. Although versatile for small-to-medium data, our experience has uncovered difficulties when scaling to large dataset sizes. Scaling may become a serious issue later and require non-trivial architectural changes across the whole analytical backend, but this option can speed up development if data size is not an issue.
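
As a small sketch of the JSON and full-text search point above (the table, columns, and connection string are hypothetical), querying semi-structured social data directly in PostgreSQL from Node.js might look like this:

    var pg = require('pg');
    var client = new pg.Client('postgres://user:password@rds-host:5432/analytics'); // hypothetical RDS instance

    client.connect(function (err) {
      if (err) throw err;
      // Assumes a table posts(data jsonb): filter on a JSON field and full-text search the post text.
      var sql =
        "SELECT data->>'author' AS author, data->>'text' AS text " +
        "FROM posts " +
        "WHERE data @> '{\"source\": \"facebook\"}' " +
        "AND to_tsvector('english', data->>'text') @@ plainto_tsquery('english', $1)";
      client.query(sql, ['hiking'], function (err, result) {
        if (err) throw err;
        console.log(result.rows);
        client.end();
      });
    });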

Custom ‘heavy’ solution with Cassandra + Spark:

PRO – Cassandra + Spark can easily handle storing and analyzing petabytes of data.
PRO – Cassandra deals with semi-structured data well, which comes in handy when storing social network data like user’s Facebook wall posts, friends, etc.
PRO – Spark includes good machine-learning (for example, dimensionality reduction) and graph-processing (usable for SNA) libraries. It also has a Python API, so external tools from NumPy and scikit-learn can be used.
PRO – As a self-hosted solution, Cassandra + Spark is much more flexible for future complex analysis tasks.
PRO – Spark has SparkSQL which is an easy integration add-on for external BI tools.
CON – Cassandra may require higher-tier competences to handle the challenges that arise when scaling, which in turn may require additional investment in support staff.
CON – Spark is a rather new technology, but it has already positioned itself well within the big data community as a next-gen Hadoop. At present, it may be hard to find people with Spark competency, but the user community is quickly growing, thus making skills easier to find as time passes.

To Conclude

The final choice is dictated by your current business priorities.

If you need to move forward fast with less maintenance routine and are not afraid of later technical debt, we recommend using the “Light” solution or Amazon DynamoDB. If your top priority is system scalability, then the ‘Heavy’ solution surfaces as the clear choice.