To Mesh and Beyond

Not too long ago the Bluetooth® SIG announced that Bluetooth® is going mesh, giving rise to a new wave of interest in mesh networking. Although that interest is growing rapidly, the solutions available on the market still rely on the trusted star topology. So what are the real possibilities?

Mesh, Ad hoc and MANET

Most networks on the market are described as “mesh ad hoc,” and because the two terms are so often used together, the difference between them gets blurred. But there is a difference, and it is important to highlight it.

A mesh network is a network topology in which all possible connections between nodes are established. This enables the main feature of a mesh network – self-healing: broken routes can be restored over alternative links between devices.

An ad hoc network is a decentralized wireless network that does not require any infrastructure to form or maintain. Nodes connect to each other whenever a connection is possible. Such a network is self-configuring, which means that devices can join it or form it on the fly.

In this sense, a mesh network is the most robust, static type of ad hoc network. When both terms are used together, however, they typically mean ad hoc only; “mesh” merely describes the physical layer of wireless communication, which is broadcast by nature, so all devices that are close enough hear each other (i.e., are connected) and form enough links for self-healing. To be completely accurate, “ad hoc” implies that the nodes are stationary; for mobile nodes there is a separate term – mobile ad hoc networks (MANETs). In today’s PAN/LAN context (Wi-Fi, Bluetooth, ZigBee), however, nodes are assumed to be static because of their use cases, even if they are occasionally moved from place to place.

Wi-Fi

Wi-Fi is an area that already has ad hoc solutions, available both as specifications and as open source. The official specification, IEEE 802.11s, is the least effective and innovative of them. It introduces two new kinds of devices: mesh portals and mesh points. Mesh portals are ordinary access points with a wired connection to the Internet; mesh points act as wireless routers between stations and portals. Everything with the “mesh” prefix is connected together wherever possible. The concept is very similar to the B.A.T.M.A.N. adv Wi-Fi mesh that is already included in the Linux kernel.

In parallel, the open source community is working on cjdns (Hyperboria), a real candidate for a darknet protocol suite. Cjdns is designed to create a wireless mesh network that is completely independent of the Internet. Its core advantages are:

  • End-to-end encryption
  • Tunnels between segments over the Internet
  • Decentralized generation of IP addresses

The last point addresses a long-standing headache for all Wi-Fi ad hoc networks: the old DHCP model conflicts with the very essence of an ad hoc network and with mobility.

Mesh networking over Wi-Fi sounds ready, but not for small, low-power devices. For those, we had better pay attention to Bluetooth® Low Energy (BLE) and ZigBee®.

Bluetooth®

The first thing mesh networking sceptics say about Bluetooth® is that it was not designed for mesh networking. It is already widespread, however, so why not try using it?

Existing BLE solutions are little more than an attempt to sell, under the “mesh network” label, things we already have in ZigBee®. To build a “mesh,” the customer has to buy a BLE gateway that forwards packets to the cloud. All mains-powered BLE devices act as routers and are interconnected with each other, while battery-powered devices talk only to routers. Nothing special.

But BLE wins in that it is already built into devices that have an Internet connection over 3G, LTE, Wi-Fi, or even a cable. That means that, in theory, the customer can have more than one gateway connected via the Internet. Moreover, the customer’s tablets and smartphones bring mobility to such a network.

The power of Wi-Fi + BLE collaboration has already been explored by Apple: check out the Multipeer Connectivity framework introduced in iOS 7 and, for example, the FireChat application, which proudly announces that the “Internet is not needed to chat.”

ZigBee®

When talking about ZigBee®, one thing should be kept in mind: it was designed from the start to be ad hoc. The routing mechanism implemented in ZigBee® is called Ad hoc On-Demand Distance Vector (AODV). Although the original AODV specification (RFC 3561) operates on IP packets while ZigBee® uses its own frames, there are no major differences. The algorithm is light on CPU and gentle on ROM, so it fits even a bulb, a smart socket, or any other mains-powered device.
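
To give a rough idea of what on-demand route discovery does, here is a minimal Scala sketch (Scala being the language used in the code samples later in this post) that floods a route request over a static topology and then walks the recorded reverse path back to the source. The node names, the link map, and the BFS-style flooding are illustrative assumptions only; real AODV adds sequence numbers, routing tables, and route expiry, all omitted here.

// Simplified sketch of AODV-style route discovery on a static topology.
// Node names and links are made up; this is not the ZigBee/RFC 3561 frame format.
object AodvSketch {
  type Node = String

  // Which devices can hear each other over the radio (symmetric links)
  val links: Map[Node, Set[Node]] = Map(
    "bulb"    -> Set("socket", "switch"),
    "socket"  -> Set("bulb", "gateway"),
    "switch"  -> Set("bulb", "gateway"),
    "gateway" -> Set("socket", "switch")
  )

  // Flood a route request (RREQ) hop by hop; every node remembers the
  // neighbour it first heard the request from, forming the reverse route
  // that the route reply (RREP) later follows back to the source.
  def discoverRoute(src: Node, dst: Node): Option[List[Node]] = {
    val cameFrom = scala.collection.mutable.Map.empty[Node, Node]
    val visited  = scala.collection.mutable.Set(src)
    var frontier = List(src)
    while (frontier.nonEmpty && !visited.contains(dst)) {
      frontier = for {
        node <- frontier
        next <- links.getOrElse(node, Set.empty[Node]).toList
        if visited.add(next) // only nodes hearing the request for the first time rebroadcast it
      } yield { cameFrom(next) = node; next }
    }
    if (dst != src && !cameFrom.contains(dst)) None // the request never reached the destination
    else Some(src :: Iterator.iterate(dst)(cameFrom).takeWhile(_ != src).toList.reverse)
  }

  def main(args: Array[String]): Unit =
    println(discoverRoute("bulb", "gateway")) // e.g. Some(List(bulb, socket, gateway))
}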

As mentioned earlier, ZigBee®-based systems on the market currently prefer the star topology, even though ZigBee® has everything it needs to be a mesh network and should be used as such. When Wi-Fi or BLE implements mesh, it is not only a technological step forward but also a marketing move. The truth is that ZigBee® is already a step ahead in terms of technology, yet perhaps a step behind in terms of marketing.

One might not like that a ZigBee® network does not use IPv6. Well, neither does BLE, and that does not hold it back. Nevertheless, there is an IEEE 802.15.4 + IPv6 + UDP solution called 6LoWPAN, with Thread and JupiterMesh built on top of it. They have not made a splash on the market yet, perhaps because nobody has positioned them as “mesh.”

As we can see, if the market wants mesh/ad hoc/MANET, all the prerequisites are in place. The technology is already around, but the customer is not aware of it, either because the market is too “shy” or because the field has not yet been covered in depth. Either way, results will come soon, and they will come from Wi-Fi, BLE, ZigBee®, or even a collaboration between them.

Why Scala Is an Awesome Programming Language

No one will argue that IT is one of the fastest-developing areas of engineering. New tools, approaches, and ideas complement or even supersede existing ones. One of the fastest-growing technology stacks is the Scala language stack. In this blog we explore what makes this language awesome.

Language Design

Scala was designed to help developers write thread-safe and concise code. It overcomes some JVM limitations and provides features that cannot be achieved in Java. Scala has a clean, expressive, and extensible syntax with lots of built-in shorthands for the most common cases.

Since less code needs to be written to accomplish the same task, the programmer can focus on solving the problem instead of spending time on boilerplate. It is especially nice if you pay your programmers per SLOC. Furthermore, Scala reduces the number of places where mistakes can be made and hence improves implementation quality.

Here is a short list of Scala language and compiler features:

  • Type inference – in most cases the compiler can automatically infer the types of values:

val i = 1 // Int

val s = "Hello, world!" // String

  •  Named and default function arguments:

class Point(x: Int = 0, y: Int = 0)

new Point(y = 1) // Point(0, 1)

  •  Tail recursion optimization – recursive self-calls are transformed into loops – no more StackOverflowError.
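
A small illustrative sketch (the factorial function below is just an example): the compiler rewrites the self-call into a loop, and the optional @tailrec annotation makes compilation fail if the call is not actually in tail position.

import scala.annotation.tailrec

// The recursive call is in tail position, so it compiles to a loop
// and the stack does not grow with n.
@tailrec
def factorial(n: BigInt, acc: BigInt = 1): BigInt =
  if (n <= 1) acc else factorial(n - 1, acc * n)

factorial(100000) // finishes without a StackOverflowError
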
  •  Support for both imperative object-oriented and functional programming – Scala combines object composition, methods, state encapsulation, inheritance, traits, and mixins with lazy evaluation, algebraic data types, pattern matching, type classes and, of course, first-class functions.
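
For example, a small algebraic data type with pattern matching might look like this (the Shape hierarchy is invented purely for illustration):

sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rectangle(width: Double, height: Double) extends Shape

// Because the trait is sealed, the compiler warns about missing cases
def area(shape: Shape): Double = shape match {
  case Circle(r)       => math.Pi * r * r
  case Rectangle(w, h) => w * h
}

area(Circle(1.0)) // 3.141592653589793
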
  •  Lots of immutable and mutable, finite and infinite collections with many implemented transformations like map, reduce and filter:

users.filter(_.lastVisitDate before today).map(_.email).foreach(sendNotification)

// Sending notifications to all users that visited our site too long ago

  •  Concurrency through actors and futures:

val sum = (sumActor ? List(1, 2, 3)).mapTo[Int] // a Future that completes with 6

val x = sum.map(_ * 2) // a Future that completes with 12

  •  Generics – the JVM uses type erasure and knows nothing about the real types of generics at runtime, but Scala can preserve that information.
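
One way to see this is scala-reflect's TypeTag (a Scala 2 sketch; the describe function is just an illustration and requires the scala-reflect library on the classpath):

import scala.reflect.runtime.universe._

// The implicit TypeTag carries the full static type to runtime,
// so the element type of the List is not lost to erasure.
def describe[T: TypeTag](value: T): String = s"$value: ${typeOf[T]}"

describe(List(1, 2, 3)) // "List(1, 2, 3): List[Int]"
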
  •  Compile-time metaprogramming – in addition to runtime reflection, the Scala compiler supports macros – functions evaluated at compile time.

Distributed Computations and Big Data

One of the fields where Scala has found its widest application is distributed computing. Scala has great mechanisms for working with data sequences, even across a cluster, which is why it is frequently used by Big Data engineers and data scientists. Here is a list of the most well-known technologies that use Scala:

  • Akka – a framework for creating distributed systems. It is based on the actor model and makes it easier to implement concurrent applications without race conditions and explicit synchronization.
  •  Spark – a very popular batch data processing framework. Spark integrates with different data sources such as Cassandra, HBase, HDFS, and Parquet files. In addition, its Streaming extension provides tools for building stream processing pipelines. One of the most powerful features of Spark is the ability to run quick ad hoc tasks to check a hypothesis; this is made possible by Spark’s design, which in some cases gives more than 100 times the performance of Hadoop jobs (see the sketch after this list).
  •  Kafka – a high-performance message queue and one of the key middleware components in data streaming systems. It is distributed by design and usually acts as a data buffer between the different stages of processing and as a medium between different parts of the system.
  •  Samza – another framework for stream processing. It is similar to Spark Streaming but works differently: instead of creating micro-batches as Spark does, it processes records as soon as they arrive, which makes Samza preferable in some cases.
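
To give a feel for what Spark code looks like in Scala, here is a minimal word-count sketch; the application name, the local master, and the input path are assumptions made purely for illustration.

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Assumptions for this sketch: a local master and a hypothetical input file
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    val counts = sc.textFile("input.txt")   // the path is an assumption
      .flatMap(_.split("\\s+"))             // split lines into words
      .map(word => (word, 1))
      .reduceByKey(_ + _)                   // aggregate counts per word

    counts.take(10).foreach(println)
    sc.stop()
  }
}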

In addition to these tools, it is also possible to use Scala for implementing good old Hadoop map-reduce jobs, either directly or via Scalding. Either way, Scala is a great choice for data processing and distributed computing.

Compatibility with Java

One more important thing is compatibility with libraries written in Java. Java code can be used from Scala directly and without limitations, which allows existing modules to be kept without re-implementing them. This is quite helpful when you need to use a rare library, legacy code, or an API that is implemented only for Java.
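
As a small sketch of how seamless this is, the classes below are plain JDK (Java 8+) types used directly from Scala; the example itself is invented for illustration:

import java.time.LocalDate
import java.util.concurrent.ConcurrentHashMap

// Standard Java classes called from Scala with no wrappers or glue code
val cache = new ConcurrentHashMap[String, LocalDate]()
cache.put("lastDeploy", LocalDate.now())
println(s"Last deploy: ${cache.get("lastDeploy")}")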

Why Scala?

To sum up, Scala is well designed and highly extensible. It has dedicated processes (SIP and SLIP) that let any Scala developer propose enhancements. Backed by a large community, the Scala ecosystem has been growing rapidly. Scala has its own tool stack and is compatible with existing Java code. It brings effective new approaches that let programmers do their job more efficiently. All these features make Scala one of the most attractive modern programming languages.

Contact us at contact@dsr-company.com to learn more or if you have a Scala project to discuss.

The Next Wave of Media and Entertainment Consumer Experience

The coming transformations in the media distribution space provide ample opportunity for vendors and operators to work together to improve efficiency and increase user satisfaction with the consumption experience.

DSR believes that after the failure of in-home 3D to gain uptake, the media & entertainment market is primed for the next wave of consumer experience improvements, which we believe will be driven by high dynamic range (HDR) video, continued over-the-top streaming distribution (including 4K video resolutions), and virtual reality. Beyond the consumer experience, the backend of media operations can be substantially enhanced and prepared for these changes by embracing IMF, IP ingest & playout, and the virtualization of media processes.

As a software engineering provider for media & entertainment vendors for more than a decade, DSR is uniquely positioned to provide our expertise in building applications and backend services to assist companies ready to embrace these transitions.

IMF, HDR, & 4K Video

Consuming video wherever and whenever has become mainstream, and enabling those consumption habits has multiplied the media transformation workload several times over in the last five years. IMF (the SMPTE Interoperable Master Format standard) finally holds the promise of simplifying versioning for this wide array of consumption channels.

IMF can simplify your media workflows by using a single package to hold the original version of a video, along with all possible substitutions and exclusions referenced by composition playlists (CPLs). Since additional CPLs are just XML documents, and substitutions and exclusions are much smaller than entirely new versions created for each distribution point, IMF promises not only simplicity in distribution but also a possible reduction in media storage.


Within IMF it is also possible to handle 4K video and, coming soon, high dynamic range (HDR) content, along with downmix instructions, so that a single file package can hold the true master content as well as the recipes for creating premium and lower-cost versions.

DSR has already built tools for customers based upon IMF parsing, packaging and file playout, and we can bring that expertise to your project as well.

Virtual Reality

Consumer appetites for new and exciting video experiences appear to be increasing, and virtual reality experiences are poised to fulfill those desires. Several challenges exist in preparing content, however, including:

  • Camera rig creation and assembly
  • Video splicing/stitching for a seamless visual experience during user pans
  • Spatial distortion correction

DSR’s wide array of experience in handling video for ingest and playout puts us in a position to help advise and create applications within the virtual reality space, particularly when dealing with video and audio layouts.

Backend Media Process Virtualization

In the last several years, DSR has helped many of our clients migrate and manage their applications from on-premise deployments to virtualized deployments, both on the cloud and in local datacenters. Our knowledge of multiple hypervisor technologies allows us to be agnostic in helping our clients migrate applications.

DSR has a wide range of expertise in refactoring applications, removing tight couplings between user interactions and data processing to enable the addition of web services. Our database expertise and API knowledge also helps speed the transition of applications from those dependent on single machine processing to those that can scale between multiple virtual machines.

Whether your organization is facing the challenges of:

  • application refactoring for virtual deployments,
  • handling multiple versions of media with IMF or HDR, or
  • bringing a virtual reality application online quickly,

DSR stands ready to help. We have over 10 years of experience in media & entertainment applications, working with enterprise application vendors and start-ups, with a stable team of engineers that understands video, audio, captions and containers (and English). Let us bring our engineering team to help in your next project.

Contact us at contact@dsr-company.com