
ekōhī Technology Platform

MedixSoft always follows the latest trends, technologies, and design patterns that emerge in the software industry. That is why we are proud to announce the creation of a technology platform called ekōhī. By relying on best-of-breed technologies, our solutions have gained important benefits such as security, reliability, flexibility, scalability, fast time to market, efficient integration, and interoperability. Careful design and a technology-independent architecture are other important factors that lead to these benefits while avoiding the well-known risks of early adoption of new technologies. This N-tier architecture has given us the ability to implement solutions that exploit the benefits of various technologies, such as:

1. HTTP-based solutions, providing interoperability and flexibility

2. TCP-based solutions with WPF smart clients, providing scalability, performance, and a great user experience

Integrated Development Platform

.NET is a framework that covers all the layers of software development. It provides the richest level of integration among presentation, component, security, and data technologies. Our business solutions are designed and implemented on the .NET platform using Visual Studio .NET.

Web Based Solutions Development

Active Server Pages (ASP) allow for dynamic page generation on Microsoft Internet Information Server (IIS). ASP pages form the presentation layer, enabling the creation of sophisticated user interfaces in a browser.

Security

Using the Windows Communication Foundation (WCF) framework, we provide secure solutions based on various methodologies and technologies.

Transport-based security and message-based security are the primary methodologies our solutions support, with messages digitally signed using certificates.

ASP.NET's Membership, Authorization, and Role providers are additional frameworks we use to achieve our security goals.

We have developed a custom audit trail mechanism to fit the demanding needs of our customers.

XML Communication & Web Services (SOA Architecture)

Extensible Markup Language (XML) is a technology that has gained popularity due to its flexibility and ease of integration. It provides a format for describing structured data and facilitates a more flexible means of communication in the current generation of web-based applications.

Relational Database Management System (RDBMS)

1. Microsoft SQL Server is a complete set of enterprise-ready technologies and tools that help people derive the most value from information at the lowest total cost of ownership. Enjoy high levels of performance, availability, and security; employ more productive management and development tools; and deliver pervasive insight with self-service business intelligence (BI).

2. Windows Server and Microsoft SQL Server deliver high levels of performance, scalability, availability, and security for mission-critical applications at a lower total cost of ownership.

3. A comprehensive, interoperable platform that empowers IT to be more productive and agile. With Microsoft Visual Studio, the Microsoft .NET Framework, and SQL Server, integrated development tools help developers quickly build rich, intuitive, and connected applications.

4. A complete BI platform that connects users to the right information at the right time to improve business decisions through familiar tools such as Microsoft Excel and SharePoint Server.

Powerful and Flexible Reporting

SQL Server Reporting Services is a report-generation environment for data gathered from SQL Server databases. Reporting Services features a web services interface to support the development of custom reporting applications. Users can create reports and explore data with Report Builder, a familiar report-authoring environment with rich visualizations. Relying on a semantic report model, users can build reports without understanding the underlying data structures.

Data Extraction, Transformation & Loading (ETL): SQL Server Integration Services (SSIS)

SQL Server Integration Services (SSIS) is used to integrate data from different data sources and provides the ETL capabilities of SQL Server for data warehousing. Integration Services includes GUI tools to build data extraction workflows that integrate various functions: extracting data from multiple sources, querying data, transforming data (including aggregating, de-duplicating, and merging data), loading the transformed data into other destinations, and sending e-mails detailing the status of the operation as defined by the user.

Our solutions integrate with SSIS, combining a unified user experience with powerful capabilities such as:

1. Speed: SSIS is one of the fastest ETL tools on the market.

2. Security: packages are digitally signed for deployment.

3. Reliability: SSIS supports transactions, enabling resilient execution.

4. Connectivity: quick access to data from multiple sources through high-speed connectors, supporting Oracle, SAP, Teradata, OLE DB, text files, the Microsoft Entity Framework, and other common sources.

Unique Content Management Integration

We integrate the content management solution of your choice, providing the flexibility to edit and publish data changes on your website. Through a flexible reporting structure and the implementation of workflows, we hand you tools that are as unique as your business. Your data changes fast, yet making those changes publicly visible doesn't require tremendous development effort. It can be as easy and as simple as sipping your morning coffee.

Open Source Tools for Big Data Analysis

Open source tools are the lifeblood of big data. You can’t have one without the other (and, really, why would you want to?).

1) Data Analysis and Platforms

a) Hadoop

The granddaddy of them all. Apache Hadoop is a framework that allows for the distributed processing of huge data sets across clusters of computers with the help of simple programming models.

To do this, Hadoop uses the parallel-processing system MapReduce. The large computational tasks of dealing with big data are divvied up into small jobs, which are then shared out among as many nodes as needed, in any available cluster, to be executed or re-executed.
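To make that division of labor concrete, here is the canonical word-count job sketched against the Hadoop Java MapReduce API; the input and output paths come from the command line and are otherwise hypothetical.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: each node turns its slice of the input into (word, 1) pairs.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: pairs with the same word are gathered and their counts summed.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation on each node
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. an HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // must not already exist
        job.waitForCompletion(true);
    }
}
```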

b) Storm

Storm’s claim to fame is its ability to process unbounded streams of data in real time, “doing for real-time processing what Hadoop did for batch processing.” It will integrate with any queuing system and any database system.

Storm’s real-time processing capabilities come in handy when analysts are dealing with highly dynamic sources. Twitter, for example, uses it to help generate its trending topics. Other applications include online machine learning, continuous computation, distributed RPC, ETL, and the list goes on.
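As an illustration, here is a minimal sketch of a Storm topology using the era's backtype.storm Java API; the random-word spout and the counting bolt are hypothetical stand-ins for a real stream source such as a message queue.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class WordCountTopology {

    // Hypothetical spout: stands in for a real unbounded source such as a queue.
    public static class RandomWordSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final String[] words = {"apple", "banana", "cherry"};
        private final Random random = new Random();

        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        public void nextTuple() {
            // Called in a loop by Storm; emit one word per call.
            collector.emit(new Values(words[random.nextInt(words.length)]));
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    // Bolt: keeps a running count per word as tuples stream through.
    public static class WordCountBolt extends BaseBasicBolt {
        private final Map<String, Integer> counts = new HashMap<String, Integer>();

        public void execute(Tuple tuple, BasicOutputCollector collector) {
            String word = tuple.getStringByField("word");
            Integer count = counts.get(word);
            counts.put(word, count == null ? 1 : count + 1);
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // Terminal bolt: emits nothing downstream.
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new RandomWordSpout(), 1);
        // fieldsGrouping routes all tuples with the same word to the same bolt task.
        builder.setBolt("counts", new WordCountBolt(), 2)
               .fieldsGrouping("words", new Fields("word"));

        // Run in-process for local testing; a real deployment submits to a cluster.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("word-count", new Config(), builder.createTopology());
    }
}
```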

c) Drill

Hadoop is outstanding at handling massive batches of data, but exploring ideas with it can make the clock drag. Taking its inspiration from Google’s Dremel system, Apache Drill is designed to permit super-fast ad-hoc querying and interactive analysis of giant data sets.

Drill can tap into multiple sources of data – HBase, Cassandra, MongoDB, and more – in addition to traditional databases. When you need a tool to scan petabytes of data and trillions of records in seconds, Drill is an excellent choice.
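Because Drill speaks standard SQL over JDBC, ad-hoc queries take only a few lines of Java. A minimal sketch, assuming an embedded local Drill instance and querying employee.json, a sample data set shipped on Drill's classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DrillQuery {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.drill.jdbc.Driver");

        // "zk=local" points the driver at an embedded/local Drill instance.
        Connection conn = DriverManager.getConnection("jdbc:drill:zk=local");
        Statement stmt = conn.createStatement();

        // Ad-hoc SQL directly over a JSON file; no schema needs to be declared first.
        ResultSet rs = stmt.executeQuery(
            "SELECT full_name, salary FROM cp.`employee.json` LIMIT 5");
        while (rs.next()) {
            System.out.println(rs.getString("full_name") + "\t" + rs.getDouble("salary"));
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}
```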

2) Statistical Languages

a) R

Where once there was S, there now is R. This freely available statistical programming language and environment is rapidly becoming the standard for statistics software and data analysis. It combines the fundamentals of the S programming language (created by John Chambers at Bell Labs) with lexical scoping semantics derived from Scheme.

Developers like it because it’s cheap, powerful, and plays well with Hadoop.

3) Data Mining

a) Mahout

This Apache project aims to teach machines some of what humans do: to recognize patterns in data. The stated goal of Mahout, a machine learning and data mining library, is to build freely distributed and scalable machine learning algorithms. The current algorithms focus on grouping related documents together (clustering), learning to match documents to categories (classification), matching users to probable favorites (collaborative filtering), and identifying items that typically fall into groups together (frequent pattern mining).

Though many of Mahout’s offerings are implemented on top of Hadoop using the map/reduce paradigm, it’s still pretty democratic. Contributors are welcome to introduce algorithms that run on a single node or a non-Hadoop cluster.
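As a small illustration of the collaborative filtering side, here is a minimal sketch using Mahout's Taste recommender API; the ratings file name is a hypothetical placeholder (CSV rows of userID,itemID,rating):

```java
import java.io.File;
import java.util.List;

import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class RecommenderSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical input: CSV lines of userID,itemID,rating.
        DataModel model = new FileDataModel(new File("ratings.csv"));

        // Score how alike two users' rating histories are.
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);

        // Consider each user's 10 most similar neighbors.
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);

        Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);

        // Top 3 items user 1 has not rated but will probably like.
        List<RecommendedItem> items = recommender.recommend(1, 3);
        for (RecommendedItem item : items) {
            System.out.println(item.getItemID() + " (" + item.getValue() + ")");
        }
    }
}
```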

b) RapidMiner

Once known as YALE (Yet Another Learning Environment), RapidMiner has been around since the beginning of the millennium. Its data mining system is available as a stand-alone application for data analysis or as an engine for integration into other products.

RapidMiner pulls its learning schemes and attribute evaluators from Weka and, for statistical modeling, offers either the native Rapid-I scripting environment or the R language. Machine learning, data mining, text mining, predictive analytics – it’s equipped to handle them all. Written in Java, it runs on every major platform and operating system.

4) Databases / Data Warehousing

a) Cassandra

Apache’s popular key-value-oriented database management system is built to juggle large amounts of data across multiple commodity servers. It prides itself on availability, scalability, and fault tolerance, and avoids bottlenecks and single points of failure.

Because data is automatically replicated to multiple nodes, failed nodes can be replaced with no downtime. That’s good news for data-critical applications.

Cassandra started its life at Facebook, where Avinash Lakshman and Prashant Malik created it to power the Inbox Search feature. Today it’s working for Netflix, eBay, Twitter, Reddit, Ooyala, and more. The largest known Cassandra cluster boasts over 300 terabytes of data on over 400 computers.
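A minimal sketch of talking to Cassandra from Java, assuming Cassandra 2.0 and the DataStax Java driver; the keyspace and table here are hypothetical:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class CassandraSketch {
    public static void main(String[] args) {
        // Contact one node; the driver discovers the rest of the ring from it.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // Hypothetical keyspace: 3 replicas per row, so a failed node costs no downtime.
        session.execute("CREATE KEYSPACE IF NOT EXISTS demo "
            + "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}");
        session.execute("CREATE TABLE IF NOT EXISTS demo.users "
            + "(user_id int PRIMARY KEY, name text)");

        session.execute("INSERT INTO demo.users (user_id, name) VALUES (1, 'Ada')");

        ResultSet rows = session.execute("SELECT user_id, name FROM demo.users");
        for (Row row : rows) {
            System.out.println(row.getInt("user_id") + " " + row.getString("name"));
        }
        cluster.close();
    }
}
```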

b) Hive

Apache’s data warehouse infrastructure is designed to facilitate querying and management of large data sets in distributed storage. Built on top of Hadoop, it allows you to overlay structure on a variety of data formats and provides tools for data querying and analysis.

Hive versions as far back as 0.8.x are compatible with Hadoop 2.0. As of late 2013, Hortonworks is looking to boost Hive’s power with the Stinger Initiative. Among other improvements, Hive will use YARN to query Hadoop directly and add a new runtime framework, Tez, for greater efficiency.
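To show how Hive overlays structure on files already sitting in HDFS, here is a sketch issuing HiveQL through the HiveServer2 JDBC driver; the table, columns, and path are hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // HiveServer2's default JDBC endpoint.
        Connection conn = DriverManager.getConnection(
            "jdbc:hive2://localhost:10000/default", "", "");
        Statement stmt = conn.createStatement();

        // Overlay a table definition on raw files already in HDFS (hypothetical path);
        // Hive does not move or convert the data.
        stmt.execute("CREATE EXTERNAL TABLE IF NOT EXISTS page_views "
            + "(user_id STRING, url STRING, ts STRING) "
            + "ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t' "
            + "LOCATION '/data/page_views'");

        // The query compiles down to Hadoop jobs behind the scenes.
        ResultSet rs = stmt.executeQuery(
            "SELECT url, COUNT(*) AS hits FROM page_views GROUP BY url");
        while (rs.next()) {
            System.out.println(rs.getString("url") + "\t" + rs.getLong("hits"));
        }
        conn.close();
    }
}
```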

c) OrientDB

OrientDB, a NoSQL DBMS written in Java, provides the schema-less flexibility of document databases, the complexity of the graph model with direct relationships among document records, and object orientation for added power and flexibility.

In addition to schema-less mode, OrientDB also functions in schema-full or hybrid mode. It ensures reliability with ACID transactions and multi-master replication, and it’s fast – storing 150,000 records per second on ordinary hardware.
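A minimal sketch of OrientDB's schema-less document API of that era; the database path and the Person class are hypothetical:

```java
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.record.impl.ODocument;

public class OrientSketch {
    public static void main(String[] args) {
        // Create an embedded database on local disk (hypothetical path).
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/tmp/demo").create();
        try {
            // Schema-less: no class or field definitions are required up front.
            ODocument person = new ODocument("Person");
            person.field("name", "Ada");
            person.field("age", 36);
            person.save();

            // A second record with different fields is perfectly legal.
            ODocument other = new ODocument("Person");
            other.field("name", "Grace");
            other.field("city", "Arlington");
            other.save();
        } finally {
            db.close();
        }
    }
}
```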

d) HBase

For random, real-time read/write access to big data, many data miners turn to Apache’s HBase, the Hadoop database. Written in Java, this scalable, distributed database is modeled after Google’s BigTable.

Running on top of Hadoop and HDFS (the Hadoop Distributed File System), HBase is particularly good at fault-tolerant storage of sparse data (i.e., small bits of information caught within a larger collection of empty or unimportant data that must be waded through efficiently).
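A minimal sketch of random reads and writes with the classic (pre-1.0) HBase Java client; the table and column family names are hypothetical and assumed to exist already:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseSketch {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "users");  // hypothetical, pre-created table

        // Random write: one cell under the "info" column family.
        Put put = new Put(Bytes.toBytes("row-42"));
        put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Ada"));
        table.put(put);

        // Random read of the same row; unset columns simply are not stored,
        // which is why HBase handles sparse data so cheaply.
        Result result = table.get(new Get(Bytes.toBytes("row-42")));
        byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
        System.out.println(Bytes.toString(name));

        table.close();
    }
}
```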

5) NoSQL

a) MongoDB

MongoDB is a cross-platform, document-oriented database system and currently the most popular NoSQL database. It ditches the rigid schemas of RDBMSs in favor of a binary form of JSON documents with dynamic schemas, giving data miners a lot of power. A minimal code sketch follows the feature list below.

You’ll find MongoDB at work in Shutterfly’s photo platform, eBay’s search suggestions, Forbes’s storage system, and MetLife’s “The Wall.”

Even better, it’s recently been updated. Features added in 2013 include:

1) A faster JavaScript engine (V8)

2) Text search (beta) and geospatial capabilities

3) Concurrent index builds and many more
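Here is that sketch of the dynamic-schema document model, using the MongoDB Java driver of the same era; the database and collection names are hypothetical:

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public class MongoSketch {
    public static void main(String[] args) throws Exception {
        MongoClient client = new MongoClient("localhost", 27017);
        DB db = client.getDB("demo");                      // hypothetical database
        DBCollection people = db.getCollection("people");  // created lazily on first insert

        // No schema to declare: each document is a JSON-like object,
        // stored internally in a binary form (BSON).
        people.insert(new BasicDBObject("name", "Ada").append("age", 36));
        people.insert(new BasicDBObject("name", "Grace").append("languages",
            new String[] {"COBOL", "FLOW-MATIC"}));  // different fields, same collection

        // Query by example.
        DBObject found = people.findOne(new BasicDBObject("name", "Ada"));
        System.out.println(found);

        client.close();
    }
}
```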

b) CouchDB

Apache’s NoSQL database uses a trio of components that make it extremely Web-friendly:

1) JSON for documents

2) JavaScript for MapReduce queries

3) HTTP for an API

In his comparison of NoSQL databases, Kristof Kovacs points out that CouchDB works well with data that accumulates, changes occasionally, answers to pre-defined queries, and needs versioning as a priority (e.g., CRM, CMS systems).
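Because the API is plain HTTP and the documents are plain JSON, a CouchDB database can be driven with nothing but an HTTP client. A minimal sketch in Java, assuming a local CouchDB with no authentication and a hypothetical notes database:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class CouchSketch {

    // Tiny helper: send one HTTP request and return the JSON response body.
    static String request(String method, String url, String body) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod(method);
        if (body != null) {
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            OutputStream out = conn.getOutputStream();
            out.write(body.getBytes("UTF-8"));
            out.close();
        }
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        StringBuilder response = new StringBuilder();
        for (String line; (line = in.readLine()) != null; ) {
            response.append(line);
        }
        in.close();
        return response.toString();
    }

    public static void main(String[] args) throws Exception {
        String base = "http://localhost:5984";  // CouchDB's default port

        // Creating a database, then a document: both are just HTTP PUTs of JSON.
        System.out.println(request("PUT", base + "/notes", null));
        System.out.println(request("PUT", base + "/notes/note-1",
            "{\"title\": \"hello\", \"body\": \"CouchDB speaks HTTP\"}"));

        // Reading it back is an HTTP GET.
        System.out.println(request("GET", base + "/notes/note-1", null));
    }
}
```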