Do Not Pay for High Speed Local Internet Speeds You Will Never Use

I have been watching all the commercials on TV for fast Internet service. Every company has a hook. I like the one that shows a family in a nice, modern house with all-white furniture and carpeting. They sit around in their brand-name clothes, using the latest electronics to watch videos and play games. The ad implies how happy they are with their new super-fast Internet service. Well, my wife, my kids, and I have a local Internet service that is a lot cheaper and just as effective.

We do not have white furniture and carpeting, and we buy a lot of our clothes from department stores. However, we do need fast Internet service. The kids are really into photography, videography, and other hobbies that take up a lot of bandwidth. I access and download large data files from work when I am telecommuting. None of us has any problems with our local Internet service speeds.

Stock Price Evaluation: Earnings Per Share and Diluted Earnings Per Share

There are many ways for investors to evaluate company profitability and stock prices. In fact, many advisors and analysts suggest that multiple financial measures be used to fully understand a company’s existing and potential performance, which could lead to an increase in dividend payouts and returns from a rising stock price. Two of these important measures are earnings per share (EPS) and diluted earnings per share. Both are ratios that reflect a corporation’s net income and give investors a simplified way to compare the stock price and performance of different companies.

Earnings per share and diluted earnings per share are calculated ratios of a company’s net income to the number of common stock shares outstanding. As stated above, the EPS figures reflect a company’s profitability, so a higher EPS can indicate higher net income. When comparing two or more stocks, the EPS allows for a basic comparison of the companies’ earning potential. For example, if someone were reviewing two companies in the same industry and saw that Company A has an EPS of $5.00 and Company B has an EPS of $10.00, it would be clear that Company B is simply earning more money per share than Company A. This is not to say that Company B is actually more profitable; it could simply have fewer shares issued than Company A.
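
To make the arithmetic concrete, here is a minimal Python sketch of the basic EPS calculation using hypothetical figures that mirror the comparison above (the function name and numbers are illustrative, not taken from any real filing):

    def earnings_per_share(net_income, shares_outstanding):
        # Basic EPS: net income divided by common shares outstanding.
        # (Preferred dividends, normally subtracted first, are ignored here.)
        return net_income / shares_outstanding

    # Hypothetical figures: Company A earns more in total, but Company B
    # has fewer shares outstanding, so its EPS is higher.
    print(earnings_per_share(50_000_000, 10_000_000))  # Company A: 5.0
    print(earnings_per_share(40_000_000, 4_000_000))   # Company B: 10.0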

Diluted earnings per share is calculated the same way as basic EPS in relation to the number of shares outstanding; however, the math used for the number of shares outstanding is taken a step further. Under diluted earnings per share, any convertible securities the company has issued, such as convertible bonds, stock options, or convertible preferred stock, must be accounted for in the number of shares outstanding. This causes diluted earnings per share to be lower than basic EPS in dollar amount, but that does not make it less important or a sign that the company’s stock is overvalued. In fact, some investors and analysts prefer to base investing decisions on the diluted EPS figure, since it reflects an entity’s use of various stock options and shows a worst-case scenario for pricing if all of those options were exercised.
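
Continuing the sketch above, a hypothetical diluted EPS calculation simply spreads the same net income over the larger, "as if converted" share count (the figure for potential new shares is an assumption for illustration):

    def diluted_eps(net_income, shares_outstanding, potential_new_shares):
        # Diluted EPS: the same net income divided by the shares that would
        # exist if all options and convertible securities were exercised.
        return net_income / (shares_outstanding + potential_new_shares)

    # Hypothetical: 4,000,000 shares outstanding plus 1,000,000 potential
    # shares from options and convertibles.
    print(diluted_eps(40_000_000, 4_000_000, 1_000_000))  # 8.0, down from 10.0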

Sometimes both basic EPS and diluted EPS will be taken a step further to evaluate an entity’s future performance. These predicted calculations use expected future net income in order to show a possible increase or decrease in EPS. These figures are another metric that investors can use to easily compare and contrast a company’s performance today with a point in time in the future, usually one fiscal year out. The hope is that investors can make easy value determinations about their stocks based on expected future earnings by using simplified ratio metrics.

Some may argue that EPS is the most important figure available in evaluating a company and its stock price. At the end of the day, investors simply want to know how much money the companies they have invested in are earning, and the EPS figures put that in an up-front and easy-to-understand number. The EPS is used directly to calculate a stock’s price/earnings (P/E) ratio. The P/E ratio is another very important evaluation number, and it would require its own article for a full explanation; because the EPS is a factor in calculating the P/E ratio, some analysts rank the EPS higher in importance. The P/E ratio tells investors how much they are paying for $1.00 in company earnings by purchasing the stock. The use of EPS in this ratio ties the two together in the evaluation of a company’s net income and in determining how expensive a share price truly is.
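
As a rough sketch of how EPS feeds into the P/E ratio, the following hypothetical example divides an assumed share price by an assumed EPS:

    def price_earnings_ratio(share_price, eps):
        # P/E: how much an investor pays for each $1.00 of annual earnings.
        return share_price / eps

    # Hypothetical: a $150.00 share price and an EPS of $10.00
    print(price_earnings_ratio(150.00, 10.00))  # 15.0 -> $15 paid per $1.00 of earnings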

The basic earnings per share and diluted earnings per share figures are just two of the many numbers, figures, and metrics used in determining the true value of a company, its share price, and the potential return on one’s investment. As stated before, the EPS should not be the only factor used to finalize an investment decision, but it may be the most important. The EPS may be the most direct way to answer the question of how much a company makes and what that entity is worth.

Information Sharing – The New Intelligence Capability

Introduction

Never has there been a more urgent time to ensure that the UK has a responsive and joined-up approach to its security challenges than the early years of the 21st Century. The asymmetric nature of the threats we face, whether man-made or environmental, physical or virtual, requires that the security & resilience community acts on intelligence from an increasingly complex network of proactive and reactive information sources with a greater level of speed and accuracy.

UK Security Challenges

1. The need for speed

The enemies we face today are resourceful and, although they implement their plans with varying levels of effectiveness, are able to create or change tactics and plans with alarming speed and in an apparently unpredictable fashion. This is a pace of change that we are currently unable to match, which means that the best-laid plans could be redundant before they are started.

2. Providing analysts with information to act upon

The culture and operations of the government departments and agencies charged with security and resilience have evolved over many years. However, this has tended to happen in a partitioned manner which militates against seamless co-operation, collaboration and information sharing. The stakeholder community is powerful and immense but, by default, cumbersome compared with the enemy we face.

Industry must therefore help government introduce information sharing measures between departments (while still maintaining the integrity of the source information) that enable analysts to make decisions, not manage information.

3. Information system procurement

At the same time, we should also consider how we manage the procurement of complex information systems. If we accept that we struggle to respond to security threats, we must ask whether the processes we undertake to define our requirements and to build and integrate our information systems lend themselves to implementing new capability quickly.

If current methods hamper the way we respond to security challenges facing us, perhaps we could harness the inherent power and capabilities of the state organs in a way that allows information to be more effectively accessed, assessed and acted upon?

Shift Happens

American lecturer Karl Fisch’s globally acclaimed presentation ‘Shift Happens’ demonstrates dramatically just how quickly the information age, and the technology driving it, is changing the world of tomorrow, today.

In light of Fisch’s assertions about the pace of technological change, industry cannot be allowed to provide IT solutions that are out of date before the ITT is published.

Similarly, if government finds it challenging to improve the inter-departmental and agency collaboration and co-operation needed to meet this pace of change and the unpredictable nature of the threats faced, it must consider an alternative approach to a solution – something which already helps the way the world rapidly shares information… the Internet.

The Internet has revolutionised our lives in many ways. The one most relevant to information sharing is its ability to let technology at different levels of evolution connect individuals and businesses together. Not having identical computers, applications or indeed levels of security is not a barrier to accessing the information in the same way.

Therefore, if we can all gain access to information using widespread and commonplace NET technologies, improving the quality of our intelligence should not mean we have to reinvent the wheel to do so.

Adopting best practice from the US

This view of information sharing and intelligence gathering was first seized upon by the US following the atrocities of 9/11. The US Office of the Director of National Intelligence (ODNI) reviewed the culture and processes of its Counter Terrorism (CT) machine, and enforced unilateral changes across its homeland security community. The ODNI rewrote policy and changed the culture, recognising that if it provided the appropriate technology, cultural change would happen automatically. It understood the nature of the young analysts now delivering the information sharing; by providing them with a common architectural backbone, the analysts were able to use commonplace NET technologies through which to forge new relationships, and through these relationships they could share information.

The architecture provided analysts with the capability to capture, collate, and disseminate intelligence from a variety of proactive and reactive information sources. However, each individual organisation owned its own presence on it while retaining control of its information assets, publishing only what needed publishing.

This can be likened to corporate websites, where users locate specific information and sites through search engines. Corporations allow staff to access the web through gateways and use services provided by others, such as internet banking or social networking sites, which demonstrates controlled access.

Once individuals have found other ‘like minded’ people, they communicate by email, collaborative tools, virtual environments, video conferencing etc. It is not a single system, but a federation of systems working to the same standards.

The US solution has therefore shown us all what can be achieved by adopting NET technology and utilising the intuitive tools that we all already use. The only, but significant, difference is that the network is secured and interconnection policies are strictly controlled. By utilising ‘Commercial Off The Shelf’ (COTS) technologies (many developed for the finance industry), Secure Managed Interfaces (SMIs) can be built to control the boundary between an organisation and the ‘network’. Each organisation owns its own presence on the network and dictates the level of access its own users have, according to the security threat mitigation level required to gain accreditation; put more simply, each organisation controls its own destiny. The content’s management and usage is controlled by the organisation and achieved through COTS technology.

With the US approach mandated, culture change was a natural evolution. The younger generation of analysts used the system as a social networking tool, posting minimal information to ‘go fishing’ for like-minded individuals, who found them using the search engines. As a result, information sharing has been enhanced significantly.

Could this work in the UK?

The UK already connects and contributes to the US CT sharing network described above. Some of our national intelligence systems connect directly, through a UK-accredited and secure gateway, to our US, Canadian and Australian allies, proving that the technology already works. The real question therefore is not whether this can be achieved technologically (it already has been), but whether we can make it work, without a decree, within current UK policy.

This paper suggests that it is possible and, furthermore, without dismantling established departmental infrastructures or currently operational information systems managed by incumbent industrial partners. In fact, some companies have already connected existing infrastructure to this type of information sharing network.

The Office for Security and Counter Terrorism (OSCT) is currently working alongside the pan industry alliance RISC (UK Security and Resilience Industry Suppliers’ Community) to understand how to provide a clear method of connecting existing national systems, using the US approach, rather than having to replace them all simultaneously.

Currently, many suppliers provide the ‘back office’ capability to the various organisations. But if they all worked together, they could create a ‘classified internet’ that allows information which needs sharing to be shared quickly enough for action to be taken on it.

A solution such as this will not compromise the raw information; only that which needs publishing to the wider community will get published. As in the US (and in compliance with the new Cabinet Office government framework for information management), each department would own its own information. However, what it also provides is the capability to share information at such a speed that it will enable the security and resilience community to respond appropriately to combat the asymmetric tactics and networks of our enemies.

Such a network would also enable non-traditional security players to have a presence on this ‘classified internet’, including those worried about non-malicious threats such as flooding, pandemics etc. This relates directly to the aspirations of the National Security Strategy to provide a joined up approach to meet the diversity of the identified issues. These issues may require non-obvious solutions; indeed non-obvious players may pick up the threat before traditional security sources.

The NET technologies would make it possible to create connections using very ‘limited’ information release; it would only take a key word, posted on a website with contact details, to make a connection between two analysts. One-to-one they can then pass information in a more controlled manner.

And there is no reason why it should stop at merely sharing information – perhaps use could be made of virtual-world technology, so that the ‘players’ within an interest group could meet and train, developing a community of useful contacts – it is not necessarily what you know, but who you know!

The UK’s adoption of such an approach does not therefore need a single mammoth procurement in which individual requirements get ‘compromised’ to meet varying organisation-specific requirements. Instead, a central ‘core’ is required to link the individual systems together, and individual procurements can move at each department’s pace.

As for the definition of the interconnection requirements, industry, in the main, understands these because it connects to the ‘web’ already. It is just the way the security-enforcing elements – all of which are off the shelf – have to be configured to meet the ‘code of connection’ that slightly complicates the issue. This again refers to configuring commonly used NET capability, not bespoke code.

Conclusion

It seems that the US approach to intelligence gathering, based on web-enabled information sharing, offers a viable approach to meeting the UK intelligence requirements of the early 21st Century. It helps:

Improve our response time to match that of our enemies and the security threats they pose, by releasing the power of the information held across government
Enhance investment in current infrastructure and technology by circumventing the need for organisational change, updates to procurement policy or the sensitivities of where information is stored
Empower analysts to do the job with which they are charged, make decisions that help protect the UK and its citizens against current threats, and give them the ability to meet the challenges of future, as yet undefined, threats

So what of these undefined threats in the coming years? Can the UK have such a system (sharing multiple information streams and enhancing intelligence) in time for a UK security landmark such as 2012, and develop, practise and perfect the capability well ahead of that date? This paper suggests that, as an industry, we can – at least in an embryonic but functional way. The truth is, this must be in place by April 2010 anyway. To have a cohesive and complete intelligence platform in place that answers immediate security questions and addresses future requirements, we must ensure a sound architectural foundation is implemented in the coming months. RISC aims to achieve industry agreement on the way forward, with the objective of putting an architectural backbone in place to meet the current intelligence challenge and that which we all face for 2012 and beyond. As Karl Fisch reminds us, shift happens – and we need to shift now.

Calculating Net Asset Value

Net asset value, or NAV, is one of the most common terms used to describe the value of a mutual fund, yet many mutual fund investors do not fully understand what it means. Before investing in a mutual fund, it is very important to know how a fund’s NAV is calculated.

Calculating the net asset value of a mutual fund is really simple. When the current market value of the fund’s net assets on a particular day is divided by the number of outstanding shares, the resulting value is the NAV. The total net assets of a mutual fund include all the securities held by the fund minus any liabilities. Net asset value per share has to be calculated daily. Here is a simple example of the NAV calculation: if the total value of the net assets in a fund is $5 million and the fund has one million shares outstanding, then the NAV, or price per share, is $5.00. Net asset value can also simply be described as the price at which shares may be purchased or sold at a particular time.
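
The example above can be sketched in a few lines of Python (the figures are the illustrative ones from the text; the zero liabilities value is an assumption for simplicity):

    def net_asset_value(total_assets, liabilities, shares_outstanding):
        # NAV per share = (market value of holdings - liabilities) / shares outstanding
        return (total_assets - liabilities) / shares_outstanding

    # $5 million in net assets and one million shares outstanding
    print(net_asset_value(5_000_000, 0, 1_000_000))  # 5.0 per share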

According to financial experts, NAV alone is not the right index by which to judge the performance of a mutual fund, because the NAV is strongly influenced by the fund distributions made by the fund manager. By law, every mutual fund has to distribute its realized net gains to investors in the form of dividends. Whenever a dividend is distributed, the NAV declines on a per-share basis. In many cases, investors prefer to reinvest the dividends or fund distributions rather than receive cash; with this option, investors obtain additional shares using those distributions, and the total value of the fund investment does not decrease even though the NAV declines.

Share Videos On Twitter

Everyone loves to share videos online whether they recorded them on their own or found them on the web. Twitter users especially love to do their sharing via mobile phone. Anyone can easily share videos on Twitter with the help of some great tools that work from their computers or mobile phones.

Without the help of Twitter apps, you can tweet the URL to a video, but this option has its limitations. There are numerous Twitter tools that help you share videos easily and efficiently and listed below are some of our favorites.

UberTwitter is an application created exclusively for BlackBerry users to embed videos within tweets as well as upload photos, update Google Talk, and tweet your location. No GPS is required for the location feature.

Twibble is a Twitter tool that works with any smartphone that supports Java. The program connects with Twitpic and Mobypicture so users can easily share videos and pictures. Twibble also includes notifications for new tweets, full screen mode and provides SSL support for security.

TwitLens is another application that you can use to upload videos. The memory limit per video upload is currently 50 MB, and videos or pictures can be posted on Twitter via mobile phone as well as from computers.

Another great Twitter tool for video sharing is Twixxer. Picture and video thumbnails appear embedded on the user’s tweets with a short URL to full-sized versions of the media. YouTube or Viddler videos can be viewed within the Twitter stream for anyone who has the app installed, so viewers do not have to leave your page.

For those who want to create their own videos and share them on Twitter, Screenr is an app that can help. It is an online recorder you can use to create your own videos and share them without downloading anything. Users can create screencasts from their Mac or PC, and viewers can watch online from virtually any internet-ready device.

Magnify.net allows users to publish videos that they find online or create themselves. You can create playlists, add comments, change the design, and more. You can integrate the videos on your website and tweet the URL. You can also find relevant videos created by other users to build a video community.

TwitC is a great Twitter tool that enables users to embed or post links to all types of files. With this application you can search, comment, view, rate, mark as favorite, and share files not only on Twitter, but also on Foursquare and Facebook. It works great for video and file sharing and you can link to YouTube, Hulu, Flickr, and more.

Whether you like to share videos on the go from your cell phone or spend time on your computer or iPad sharing videos, there are definitely some great Twitter tools out there that are easy to use and free of charge. With one or two of these seven applications, you will be able to share that funny clip you saw on YouTube or upload your own homemade video for all of your Twitter followers to enjoy.

Net-Centric Air Traffic Management System Explained

Net-centric, in its most common definition, refers to “participation as a part of a continuously evolving, complex community of people, devices, information and services interconnected by a communications network to optimise resource management and provide superior information on events and conditions needed to empower decision makers.” It will be clear from the definition that “net-centric” does not refer to a network as such. It is a term that covers all elements constituting the environment referred to as “net-centric”.

Exchanges between members of the community are based not on cumbersome individual interfaces and point-to-point connections but on a flexible network paradigm that is never a hindrance to the evolution of the net-centric community. Net-centricity promotes a “many-to-many” exchange of data, enabling a multiplicity of users and applications to make use of the same data, which in itself extends well beyond the traditional, predefined and package-oriented data set while still being standardised sufficiently to ensure global interoperability. The aim of a net-centric system is to make all data visible, available and usable, when needed and where needed, to accelerate and improve the decision-making process.

In a net-centric environment, unanticipated but authorised users can find and use data more quickly. The net-centric environment is populated with all data on a “post and share before processing” basis enabling authorised users and applications to access data without wait time for processing, exploitation and dissemination. This approach also enables vendors to develop value added services, tailored to specific needs but still based on the shared data.

In the context of Air Traffic Management (ATM), the data to be provided is that concerning the state (past, present and future) of the ATM Network. Participants in this complex community created by the net-centric concept can make use of a vastly enlarged scope of acceptable data sources and data types (aircraft platforms, airspace user systems, etc.) while their own data also reaches the community on a level never previously achieved.

How are decisions different in a net-centric environment?

Information sharing, and the end-user applications it enables, is the most beneficial enabler of collaborative decision making. The more complete the information that is being shared and the more thorough its accessibility to the community involved, the higher the benefit potential. In a traditional environment, decisions are often arbitrary and the effects of the decisions are not completely transparent to the partners involved. Information sharing on a limited scale (as is the case in the mainly local information sharing implemented so far) results in a substantial improvement in the quality of decisions, but this improvement is mainly local and improvements in the overall ATM Network are consequential rather than direct.

If the ATM Network is built using the net-centric approach, decisions are empowered on the basis of information available in the totality of the net-centric environment and interaction among members of the community, irrespective of their role or location, can be based on need rather than feasibility.

Since awareness of the state (past, present or future) of the ATM Network is not limited by lack of involvement of any part as such, finding out the likely or actual consequences of decisions is facilitated, providing an important feed-back loop that further improves the quality of decisions on all levels.

Looking at things from the collaborative decision making (CDM) perspective, it is important to realise that net-centricity is not something created for the sole purpose of making CDM better. Net-centricity is a feature of the complete ATM system design, providing different benefits to different aspects of air traffic management operations. It is when collaboration in decision making exploits also the facilities made possible by the overall net-centric ATM Network, that the superior quality of decisions becomes truly visible.

The concept of services

In traditional system design, information technology (IT) was often driving developments and the functionality being provided in some cases became a limitation on the business it was meant to support. Service orientation is the necessary step to separate the business processes from the IT processes and to enable business considerations to drive the underlying IT requirements. Aligning IT to the business rather than the other way round improves business agility and efficiency.

“Service” in this context is defined as “the delivery of a capability in line with published characteristics, including policies.” This refers to the ATM services required and not the underlying (technical) supporting services and physical assets that need to be deployed. In other words, service refers to the business services and not the information technology services.

Well-designed business services must exhibit a number of characteristics that describe the service being offered sufficiently well for the service consumer(s) to clearly understand the service and hence want to make use of it.

On the business level, contracts and service level agreements that put the service in the proper context are very important as they cover not only the function(s) that will be performed but also the non-functional terms and conditions to which the consumer and provider have agreed.

There are several business processes that can be identified in the context of air traffic management. Some are related to the aircraft themselves (e.g. turn-round), others concern the passengers and their baggage. These and all other business processes require specific services to progress and complete in accordance with the business objectives of the process owner. Cleaning and refuelling of the aircraft, passenger check-in, security checking, etc. are just a few examples of the business services that need to be provided in order to achieve the objective, in this case a timely and safe departure.

When viewed on an enterprise level, a given service once defined is often reusable across the enterprise where identical or similar processes are found, resulting in a major potential for cost saving.

The services so defined will then set the requirements for the underlying IT support.

The effects of net-centric integration

The term “integration” is often associated with “centralisation” and the elimination/rationalisation of facilities. While from an economic perspective integration may indeed mean all of the above, net-centric integration is about empowering better decision making through the creation of the complex, networked community of people, devices, information and services that generate benefits to all members of the community without necessarily changing the mapping (nature, number and location) of the community members.

At the same time, net-centric integration enables superior business agility and flexibility so that community members may evolve and change (drop out or new ones come in) in response to the changing needs of the users of the system concerned.

In the net-centric context it is not integration as such that changes the enterprise landscape. Such changes, if any, are the result of the economic imperatives that need to be met and which can now be met based on the improved business agility.

The end-user aspects of net-centric operations

One of the less understood aspects of traditional decision making is that it is not really possible to realise when decisions are based on less than full and/or correct information. The garbage in/garbage out principle applies also to the decision-making process. At the same time, the effects of less than good decisions may not be immediately visible. In many cases, poor decisions will affect the efficiency of the overall operation without the negative effects even being traceable to individual decisions. So, while everyone may be doing their very best, the result may still be far short of the quality that would otherwise be achievable.

When the scope and quality of data upon which decisions are based is expanded and improved, the quality of decisions improves almost automatically. The decision makers will notice the expanded possibilities and ultimately the success of the enterprise will also improve in a visible way.

When net-centric operations are introduced, the potential for improvement and the options for achieving it multiply considerably. In the more restricted environment, end-users will have been asking for more information and for tools to make using data easier. More often than not, their wish went unfulfilled due to lack of data, poor data quality, and the consequently poor performance of any tools that may have been created. The shared environment under net-centric operations brings all the data anyone may ever wish to have. The services are defined on the basis of the business needs and will also support the tools end-users need to interact properly with the net-centric environment, integrating their individual decision-making processes into a coherent whole.

In a way a well implemented net-centric system is transparent to the end-users. In particular, they do not need to concern themselves with the location of data they require or the quality thereof. Information management, that is part of the net-centric environment, takes care of finding the information needed and also its quality assurance.

End-user applications are the most visible part of net-centric operations and they can be built to satisfy end-user needs in respect of any process that needs to be handled.

In the ATM context, vastly improved controller decision making tools, safety nets and trajectory calculation are only a few examples of the possible benefits.

The institutional implications of net-centric operations

International air navigation is by definition a highly regulated environment and regulations provide some of the most important pillars of both safety and interoperability. The net-centric and service oriented future ATM environment possesses a number of aspects which by themselves provide powerful drivers for proper regulation. It is important to note that the institutional issues associated with net-centric operations are wider than just CDM and hence early efforts to address the CDM related aspects will benefit the whole of the ATM enterprise. The items of particular relevance are summarised below:

o Wide scope of information contributors – The information needs of the future ATM Network, including the scope of that information, will result in a multitude of new information sources/contributors and/or new types of information being obtained from various information sources.

o Air and ground integration – In the traditional ATM set-up, the coupling between ground and airborne systems is normally very loose or non-existent. Once the net-centric ATM Network is realised and aircraft become nodes on the network, a completely new regulatory-target regime is created in the form of the integrated air/ground ATM elements.

o Information sharing – The value of using shared information is one of the main reasons why System Wide Information Management (SWIM) for the future net-centric ATM environment is being defined. There are however legitimate requirements for protecting some information in one or more of several ways, including de-identification of the source, limiting access, etc.

o Integration of diverse airspace use activities – Airspace is used for various purposes and civil aviation is just one of those. Specific military usage (not all of which involves aircraft operations) as well as various civilian projects and missions employ information that is even more sensitive than the normal business or security sensitive information categories. Their proper protection is essential if the military and other operators generating such sensitive information are to be integrated into the overall ATM process. This aspect poses a specific challenge since not only is the information possibly in a military/State security domain but the regulatory domains may also be nested in different organisations that need to be brought together for and under the SWIM umbrella.

o Disappearance of the difference between voice and data – In the mid- to longer time frames, the expected traffic levels will make the move to almost exclusive use of digital link communications inevitable. This does not mean the disappearance of voice communications on the end-user level. However, a reliable communications system that can serve the voice and data needs of the future ATM environment is by definition digital and hence even voice messages will be transferred via digital means. Hence a convergence of the regulatory regimes for voice and data communications will be inevitable.

o Global interoperability – Aeronautical information has always been global in nature but the strongly limited access and product oriented philosophy has contained the issues of global interoperability. The net-centric approach of the new ATM environment will create large islands of shared information which must however be able to interoperate between each other as well as with legacy environments, constituting a new, global need for proper regulatory regimes.

o Common information pipes for passenger and operational communications – In the traditional analogue environment, aviation has enjoyed dedicated communications means and this tradition was carried over to a certain extent also into the new digital communications technologies. The dedicated “pipe” in air/ground communications is certainly a reality today but the same cannot be said of the ground-ground communications links. The early point to point connections have been replaced in most applications by leased lines which, for substantial segments, are in fact shared with other, often not aviation, users. The drivers behind this change are obviously cost effectiveness considerations. Although early attempts to provide in-flight passenger connectivity have not proved the commercial success many had forecast, it is already visible that in the not too distant future, personal communications needs will evolve to the point where people will demand uninterrupted connectivity even on relatively short flights. Since such demands will always fetch a premium price, it stands to reason that combining the operational and passenger connectivity needs onto a single air/ground pipe could be commercially attractive. While the technology to do this safely will certainly be available, the regulatory aspects will have to be explored in time to ensure that the actual solutions used meet all the safety and other requirements.

o The value of information – Information is a valuable commodity and in the competitive environment of aviation this commodity is of course sought after by many partners, including others than only aircraft operators or airports. The essential safety contribution of information in air traffic management creates an especially complicated web of relationships, some commercial some not, some State obligations some voluntary, and so on that need to be properly regulated with a view to ensuring cost recovery while not discouraging information use.

o Cost effectiveness – Although not always thought of as a driver for regulation, a proper regulatory environment will favour cost-effective, user oriented solutions.

o Training and personnel licensing – The information sharing environment of SWIM will require experts who are conversant not only with the requirements of air traffic management and aircraft operations but also the information technology aspects of the new approach to managing information. This has implications in the construction and approval of training syllabuses, examination fulfilment criteria as well as the qualification requirements. The need for refresher/recurrent training also grows and needs to be part of the overall regulatory regime.

o Standardisation – System wide sharing of information in a net-centric environment requires that the data be the subject of proper standardisation on all levels. This is the key to achieving global interoperability in the technical as well as the service/operational sense. The development and use of the necessary standards can only be realised under a proper regulatory regime.

All the above aspects imply the creation of a regulatory regime that is aligned with the specific needs of a net-centric operation and which is able to regulate for safety and proper performance, including economic performance, appropriate for the new digital environment. Trying to apply traditional methods of regulation without taking the new realities into account is counterproductive and must be avoided. This is an important message for both the regulators and the regulated.

The aspects of regulation to be considered include:

o Safety
o Security
o Information interoperability
o Service level interoperability
o Physical interoperability
o Economics

In terms of who should be regulated, thought should be given to at least:

o The State as data provider
o Licensed providers of services, including network services
o Licensed data sources
o Licensed providers of end-user applications
o User credentials and trusted users

It is also important to answer the question: who should be the regulator? This must be agreed in terms of:

o International rules and global oversight
o Licensing rules and global oversight

The types of regulatory activities that need to be put in place concern mainly compliance verification and certification; quality maintenance; and enforcement and penalties.

As mentioned already, the above institutional aspects concern more than just CDM; however, for CDM, and in particular information sharing, to work in the net-centric environment, they need to be addressed as a prerequisite of implementation.

The technical implications of net-centric operations

On the conceptual level, net-centric operations mean the sharing of superior quality information as part of a community and acting on that information to improve decisions for the benefit of the individual as well as for the network (the networked community). Obviously, this type of operation must be enabled by a proper technical infrastructure.

This technical infrastructure is often thought of as a network with the required bandwidth and reliability. It is true that replacing the one-to-one connections that characterise legacy systems with the many-to-many relationships of the net-centric environment requires a powerful network that fully meets all the quality requirements, but there is much more to net-centricity than this.

The management of the shared data pool, including currency, access rights, quality control, etc. brings in a layer of technical requirements that sit higher than the network as such.

If we then define ‘information’ as ‘data put in context’, it is easy to see that creating the information from the shared data constitutes yet another layer of required technical solutions. These are often referred to as intelligent end-user applications: tools which end-users can call upon to perform the tasks they need to successfully complete their missions. End-users may be pilots, air traffic controllers, flight dispatchers, handling agents or any other person or system with a need for the shared information. In all cases, the end-user applications collect and collate the data needed to create the information required. This may be a synthetic display of the airport on an EFB, a trajectory on a what-if tool display or a list of arrivals for the taxi company, and so on.

End-user applications are scalable to fit, both in functionality and cost, the specific needs of the end-user for whom they are created. This scalability enables the end-user applications to run on different networked devices from simple PDAs through airlines systems to on-board equipment.

It should be noted that one of the most important characteristics of a net-centric environment that technical solutions must support is that the requirements placed on equipment are driven by the services/functionality it must provide and NOT by its actual location in the network. As an example, the integrity of the data used to build a trajectory and the quality of the application used to manipulate/interact with the trajectory will depend on the use that will be made of the trajectory, and not per se on whether the application is running on the ground or in an aircraft.

This adaptability of the technical solutions to the actual needs (rather than location in the network) leads to important cost saving opportunities.

Net-centricity – the essence of the future

The net-centric approach to system design is not a silver bullet. It is just the environment that enables properly managed information to be exploited to the full, providing the enterprise with the agility it needs to constantly adapt to the changing world for the benefit of the customers and the enterprise itself.

Mutual Fund NAV – Net Asset Value And Its Use

Mutual fund NAV is defined as the net asset value of the fund. Shares are traded daily at a share price which changes every day. Every mutual fund has a NAV, or net asset value per share, which is computed every day and is derived from that day’s closing market prices of the shares and other securities in its investment portfolio.

Every buy or sell order for shares of a mutual fund is priced based on the NAV on the day of the trade. The investor will not actually know the price at which the transaction took place until the next day.

Mutual funds by definition pay out all of their income and capital gains to their shareholders. Because of this, changes in the NAV of a fund are definitely not the best way to gauge its performance; true performance is best measured by the annual total return.

Closed-end mutual funds and ETFs are traded in the same way as stocks on the stock market. As a result, their shares trade at market value, which can sometimes be above (trading at a premium) or below (trading at a discount) the fund’s actual net asset value.
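
A small, hypothetical sketch of the premium/discount calculation for a closed-end fund or ETF (the prices are made up for illustration):

    def premium_or_discount(market_price, nav_per_share):
        # Positive result = trading at a premium to NAV; negative = at a discount.
        return (market_price - nav_per_share) / nav_per_share

    # Hypothetical: NAV of $20.00 per share, market price of $19.00
    print(premium_or_discount(19.00, 20.00))  # -0.05 -> a 5% discount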

Exchange traded funds (ETFs) are traded on the stock market daily, just like any stock is traded. The ETFs value per share is also commonly known as its NAV or Net Asset Value per share.

To summarise, the dollar value per share of any mutual fund is computed by dividing the total value of all the securities in the fund’s portfolio, minus any liabilities, by the number of shares outstanding at the time of calculation.

Internet Protocol Version Four

Internet Protocol: Communication between hosts can happen only if they can identify each other on the network. In a single collision domain (where every packet sent on the segment by one host is heard by every other host) hosts can communicate directly via MAC addresses. A MAC address is a factory-coded 48-bit hardware address which can also uniquely identify a host. But if a host wants to communicate with a remote host, i.e. one not in the same segment or not logically connected, then some means of addressing is required to identify the remote host uniquely. A logical address is given to all hosts connected to the Internet, and this logical address is called an Internet Protocol address.

The network layer is responsible for carrying data from one host to another. It provides the means to allocate logical addresses to hosts and to identify them uniquely using those addresses. The network layer takes data units from the transport layer and cuts them into smaller units called data packets.

The network layer defines the data path the packets should follow to reach the destination. Routers work on this layer and provide the mechanism to route data to its destination. A majority of the internet uses a protocol suite called the Internet Protocol Suite, also known as the TCP/IP protocol suite. This suite is a combination of protocols which encompasses a number of different protocols for different purposes and needs. Because the two major protocols in this suite are TCP (Transmission Control Protocol) and IP (Internet Protocol), it is commonly termed the TCP/IP protocol suite. This protocol suite has its own reference model which it follows over the internet. In contrast with the OSI model, this model contains fewer layers.

Internet Protocol Version 4 (IPv4)

Internet Protocol is one of the major protocols in the TCP/IP protocols suite. This protocol works at the network layer of the OSI model and at the Internet layer of the TCP/IP model. Thus this protocol has the responsibility of identifying hosts based upon their logical addresses and to route data among them over the underlying network.

IP provides a mechanism to uniquely identify hosts by an IP addressing scheme. IP uses best-effort delivery, i.e. it does not guarantee that packets will be delivered to the destination host, but it will do its best to reach the destination. Internet Protocol version 4 uses 32-bit logical addresses.
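
As a quick illustration of the 32-bit address, the sketch below (Python, using the standard socket and struct modules) converts a dotted-decimal address to its underlying 32-bit integer and back; the address chosen is arbitrary:

    import socket, struct

    packed = socket.inet_aton("192.168.1.10")            # the 4 bytes carried on the wire
    as_int = struct.unpack("!I", packed)[0]
    print(as_int)                                        # 3232235786 (fits in 32 bits)
    print(socket.inet_ntoa(struct.pack("!I", as_int)))   # back to 192.168.1.10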

Internet Protocol, being a layer-3 protocol (OSI), takes data segments from layer-4 (Transport) and divides them into packets. An IP packet encapsulates the data unit received from the layer above and adds its own header information.

The encapsulated data is referred to as IP Payload. IP header contains all the necessary information to deliver the packet at the other end.

The IP header includes many relevant fields, including the Version Number, which in this context is 4. The other fields are listed below, followed by a short parsing sketch:

  • Version: Version no. of Internet Protocol used (e.g. IPv4).
  • IHL: Internet Header Length; Length of entire IP header.
  • DSCP: Differentiated Services Code Point; this is Type of Service.
  • ECN: Explicit Congestion Notification; It carries information about the congestion seen in the route.
  • Total Length: Length of entire IP Packet (including IP header and IP Payload).
  • Identification: If an IP packet is fragmented during transmission, all the fragments contain the same identification number, used to identify the original IP packet they belong to.
  • Flags: As required by the network resources, if the IP packet is too large to handle, these ‘flags’ tell whether it can be fragmented or not. In this 3-bit field, the MSB is always set to ‘0’.
  • Fragment Offset: This offset tells the exact position of the fragment in the original IP Packet.
  • Time to Live: To avoid looping in the network, every packet is sent with some TTL value set, which tells the network how many routers (hops) this packet can cross. At each hop, its value is decremented by one and when the value reaches zero, the packet is discarded.
  • Protocol: Tells the network layer at the destination host which protocol this packet belongs to, i.e. the next-level protocol. For example, the protocol number of ICMP is 1, TCP is 6 and UDP is 17.
  • Header Checksum: This field holds the checksum value of the entire header, which is used to check whether the packet was received error-free.
  • Source Address: 32-bit address of the Sender (or source) of the packet.
  • Destination Address: 32-bit address of the Receiver (or destination) of the packet.
  • Options: This is an optional field, which is used if the value of IHL is greater than 5. These options may contain values for features such as Security, Record Route, Time Stamp, etc.
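
The sketch below shows one way to pull the fields above out of the fixed 20-byte IPv4 header with Python’s struct module; it is illustrative only and ignores the variable-length Options field:

    import socket
    import struct

    def parse_ipv4_header(raw):
        # Unpack the fixed 20-byte header in network (big-endian) byte order.
        (ver_ihl, dscp_ecn, total_len, ident, flags_frag,
         ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
        return {
            "version": ver_ihl >> 4,               # should be 4 for IPv4
            "ihl": ver_ihl & 0x0F,                 # header length in 32-bit words
            "dscp": dscp_ecn >> 2,
            "ecn": dscp_ecn & 0x03,
            "total_length": total_len,
            "identification": ident,
            "flags": flags_frag >> 13,
            "fragment_offset": flags_frag & 0x1FFF,
            "ttl": ttl,
            "protocol": proto,                     # 1 = ICMP, 6 = TCP, 17 = UDP
            "header_checksum": checksum,
            "source": socket.inet_ntoa(src),
            "destination": socket.inet_ntoa(dst),
        }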

The Internet Protocol hierarchy contains several classes of IP addresses to be used efficiently in various situations, as per the requirement of hosts per network. Broadly, the IPv4 addressing system is divided into five classes of IP addresses. All five classes are identified by the first octet of the IP address.

The Internet Corporation for Assigned Names and Numbers (ICANN) is responsible for assigning IP addresses.

The first octet referred to here is the leftmost one: in dotted-decimal notation, an IP address is written as four octets numbered 1 to 4 from left to right (for example, in 192.168.1.10 the first octet is 192).

The number of networks and the number of hosts per class can be derived from this formula: number of networks = 2^(network bits), and number of usable hosts per network = 2^(host bits) - 2.

When calculating hosts’ IP addresses, 2 IPs are subtracted because they cannot be assigned to hosts: the first IP of a network is the network number and the last IP is reserved as the broadcast IP.
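
The figures quoted in the class descriptions below follow directly from that formula; this small sketch reproduces them (the bit counts are the standard classful values):

    def usable_hosts(host_bits):
        # Subtract the network number and the broadcast address.
        return 2 ** host_bits - 2

    print(2 ** 7 - 2, usable_hosts(24))   # Class A: 126 networks, 16777214 hosts
    print(2 ** 14,    usable_hosts(16))   # Class B: 16384 networks, 65534 hosts
    print(2 ** 21,    usable_hosts(8))    # Class C: 2097152 networks, 254 hosts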

Class A Address

The first bit of the first octet is always set to 0 (zero). Thus the first octet ranges from 1 – 127, i.e. from 00000001 to 01111111 in binary.

Class A addresses only include IPs from 1.x.x.x to 126.x.x.x. The IP range 127.x.x.x is reserved for loopback IP addresses.

The default subnet mask for a Class A IP address is 255.0.0.0, which implies that Class A addressing can have 126 networks (2^7 - 2) and 16777214 hosts (2^24 - 2) per network.

Class A IP address format is thus: 0NNNNNNN.HHHHHHHH.HHHHHHHH.HHHHHHHH

Class B Address

An IP address which belongs to class B has the first two bits in the first octet set to 10, i.e. the first octet ranges from 10000000 to 10111111 in binary (128 – 191).

Class B IP range from 128.0.x.x to 191.255.x.x. The default subnet mask for Class B is 255.255.x.x.

Class B has 16384 (2^14) network addresses and 65534 (2^16 - 2) host addresses.

Class B IP format is: 10NNNNNN.NNNNNNNN.HHHHHHHH.HHHHHHHH

Class C Address

The first octet of a Class C IP address has its first 3 bits set to 110, i.e. the first octet ranges from 11000000 to 11011111 in binary (192 – 223).

Class C IP range from 192.0.0.x to 223.255.255.x. The default subnet mask for Class C is 255.255.255.x.

Class C gives 2097152 (2^21) network addresses and 254 (2^8 - 2) host addresses.

Class C IP address format is: 110NNNNN.NNNNNNNN.NNNNNNNN.HHHHHHHH

Class D Address

The very first four bits of the first octet in Class D IP addresses are set to 1110, giving a first-octet range of 11100000 to 11101111 in binary (224 – 239).

Class D has an IP range from 224.0.0.0 to 239.255.255.255. Class D is reserved for multicasting. In multicasting, data is not destined for a particular host; that is why there is no need to extract a host address from the IP address, and Class D does not have any subnet mask.

Class E Address

This IP class is reserved for experimental purposes only, such as R&D or study. IP addresses in this class range from 240.0.0.0 to 255.255.255.254. Like Class D, this class too is not equipped with any subnet mask.

Each IP class is equipped with its own default subnet mask, which bounds that IP class to a fixed number of networks and a fixed number of hosts per network. Classful IP addressing does not provide any flexibility to have fewer hosts per network or more networks per IP class.

CIDR, or Classless Inter-Domain Routing, provides the flexibility of borrowing bits from the host part of the IP address and using them as a network within the network, called a subnet. By using subnetting, one single Class A IP address range can be used to create smaller sub-networks, which provides better network management capabilities.

Class A Subnets

In Class A, only the first octet is used as the network identifier and the remaining three octets are assigned to hosts (i.e. 16777214 hosts per network). To make more subnets in Class A, bits from the host part are borrowed and the subnet mask is changed accordingly.

For example, if one MSB (Most Significant Bit) is borrowed from the host bits of the second octet and added to the network address, it creates two subnets (2^1 = 2) with 8388606 (2^23 - 2) hosts per subnet.
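
Python’s ipaddress module can illustrate this single-bit example; the 10.0.0.0/8 network is just a convenient Class A range to demonstrate with:

    import ipaddress

    class_a = ipaddress.ip_network("10.0.0.0/8")
    for subnet in class_a.subnets(prefixlen_diff=1):   # borrow one host bit
        print(subnet, subnet.num_addresses - 2)        # usable hosts per subnet
    # 10.0.0.0/9 8388606
    # 10.128.0.0/9 8388606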

The subnet mask is changed accordingly to reflect the subnetting. Given below is a list of all possible combinations of Class A subnets:

In the case of subnetting too, the very first and last IP addresses of every subnet are used for the subnet number and the subnet broadcast IP respectively. Because these two IP addresses cannot be assigned to hosts, subnetting cannot be implemented using more than 30 bits as network bits, as that would provide fewer than two hosts per subnet.

Class B Subnets

By default, using classful networking, 14 bits are used as network bits, providing 16384 (2^14) networks and 65534 (2^16 - 2) hosts. Class B IP addresses can be subnetted the same way as Class A addresses, by borrowing bits from the host bits. Given below are all possible combinations of Class B subnetting:

Class C Subnets

Class C IP addresses are normally assigned to very small networks because a Class C network can only have 254 hosts. Given below is a list of all possible combinations of subnetted Class C IP addresses:

Internet Service Providers may face a situation where they need to allocate IP subnets of different sizes as per the requirements of customers. One customer may ask for a Class C subnet of 3 IP addresses and another may ask for 10 IPs. For an ISP, it is not feasible to divide the IP addresses into fixed-size subnets; rather, it may want to subnet the subnets in a way that results in minimum wastage of IP addresses.

For example, an administrator has the 192.168.1.0/24 network. The suffix /24 (pronounced “slash 24”) tells the number of bits used for the network address. In this example, the administrator has four departments with different numbers of hosts: Sales has 100 computers, Purchase has 50 computers, Accounts has 25 computers and Management has 5 computers. With fixed-length subnetting, all subnets are the same size, so the administrator cannot fulfil all of these requirements using that methodology.

The following procedure shows how VLSM can be used in order to allocate department-wise IP addresses as mentioned in the example.

Step – 1

Make a list of the possible subnets.

Step – 2

Sort the requirements of IPs in descending order (Highest to Lowest).
• Sales 100
• Purchase 50
• Accounts 25
• Management 5

Step – 3

Allocate the highest range of IPs to the highest requirement, so let’s assign 192.168.1.0 /25 (255.255.255.128) to the Sales department. This IP subnet with network number 192.168.1.0 has 126 valid host IP addresses, which satisfies the requirement of the Sales department. The subnet mask used for this subnet has 10000000 as the last octet.

Step – 4

Allocate the next highest range, so let’s assign 192.168.1.128 /26 (255.255.255.192) to the Purchase department. This IP subnet with Network number 192.168.1.128 has 62 valid Host IP Addresses which can be easily assigned to all the PCs of the Purchase department. The subnet mask used has 11000000 in the last octet.

Step – 5

Allocate the next highest range, i.e. Accounts. The requirement of 25 IPs can be fulfilled with 192.168.1.192 /27 (255.255.255.224) IP subnet, which contains 30 valid host IPs. The network number of Accounts department will be 192.168.1.192. The last octet of subnet mask is 11100000.

Step – 6

Allocate the next highest range to Management. The Management department contains only 5 computers. The subnet 192.168.1.224 /29 with the Mask 255.255.255.248 has exactly 6 valid host IP. So this can be assigned to Management. The last octet of the subnet mask will contain 11111000.

By using VLSM, the administrator can subnet the address range in such a way that the least number of IP addresses is wasted. Even after assigning IPs to every department, the administrator in this example is still left with plenty of IP addresses, which would not have been possible with fixed-size subnets.
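
The whole allocation can be reproduced with Python’s ipaddress module; the prefix lengths below simply restate the choices made in Steps 3 to 6:

    import ipaddress

    base = ipaddress.ip_network("192.168.1.0/24")
    plan = [("Sales", 25), ("Purchase", 26), ("Accounts", 27), ("Management", 29)]

    next_addr = int(base.network_address)
    for dept, prefix in plan:
        subnet = ipaddress.ip_network((next_addr, prefix))
        print(dept, subnet, "usable hosts:", subnet.num_addresses - 2)
        next_addr += subnet.num_addresses
    # Sales 192.168.1.0/25 usable hosts: 126
    # Purchase 192.168.1.128/26 usable hosts: 62
    # Accounts 192.168.1.192/27 usable hosts: 30
    # Management 192.168.1.224/29 usable hosts: 6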

There are a few reserved IPv4 address ranges which cannot be used on the Internet. These addresses serve special purposes and cannot be routed outside the local area network.

Private IP

Every class of IP (A, B and C) has some addresses reserved as private IP addresses. These IPs can be used within a network, campus or company and are private to it. These addresses cannot be routed on the Internet, so packets containing these private addresses are dropped by routers.

In order to communicate with the outside world, these IP addresses must be translated to public IPs using NAT, or a web proxy server can be used.

The sole purpose of creating a separate range of private addresses is to control the assignment of the already-limited IPv4 address pool. Using private address ranges within LANs has significantly reduced the global demand for IPv4 addresses and has helped delay IPv4 address exhaustion.

The IP class used for the private range can be chosen according to the size and requirements of the organization. Larger organizations may choose the Class A private address range, while smaller organizations may opt for Class C. These addresses can be further subnetted and assigned to departments within an organization.
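As an illustration, a short Python sketch (standard ipaddress module; the sample addresses are arbitrary) can check addresses against the RFC 1918 private blocks that correspond roughly to the Class A, B and C private ranges:

    import ipaddress

    # RFC 1918 private blocks.
    private_blocks = [ipaddress.ip_network(n)
                      for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    for addr in ("10.1.2.3", "172.20.0.5", "192.168.1.10", "8.8.8.8"):
        ip = ipaddress.ip_address(addr)
        block = next((b for b in private_blocks if ip in b), None)
        print(f"{addr:<14} private={ip.is_private}  block={block}")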

Loopback IP

The IP range 127.0.0.0 – 127.255.255.255 is reserved for loopback, i.e. a host's self-address, also known as the localhost address. This loopback IP is managed entirely by and within the operating system. Loopback addresses enable the server and client processes on a single system to communicate with each other. When a process creates a packet with a loopback destination address, the operating system loops it back to itself without involving the NIC.

Data sent to a loopback address is forwarded by the operating system to a virtual network interface within the operating system. This address is mostly used for testing purposes, such as running a client-server architecture on a single machine. Beyond that, if a host machine can successfully ping 127.0.0.1 or any IP from the loopback range, it implies that the TCP/IP software stack on the machine is loaded and working.
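Here is a minimal sketch of that client-and-server-on-one-machine idea in Python; the port number is arbitrary, and the thread is only there so both ends can run in a single script:

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007   # loopback address; the port is arbitrary
    ready = threading.Event()

    def server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            ready.set()                # the server is now accepting connections
            conn, _ = srv.accept()
            with conn:
                conn.sendall(b"hello over loopback")

    threading.Thread(target=server, daemon=True).start()
    ready.wait()

    # The client talks to a process on the same machine; the traffic never
    # leaves the operating system or touches the NIC.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        print(cli.recv(1024).decode())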

Link-local Addresses

In case a host is not able to acquire an IP address from the DHCP server and has not been assigned one manually, it can assign itself an address from the reserved link-local range, 169.254.0.0 – 169.254.255.255.

Assume a network segment where all systems are configured to acquire IP addresses from a DHCP server connected to the same segment. If the DHCP server is not available, no host on the segment would be able to communicate with any other. Windows (98 or later) and Mac OS (8.0 or later) support this self-configuration of link-local IPs. In the absence of a DHCP server, every host randomly chooses an address from the above-mentioned range and then uses ARP to check that no other host has configured itself with the same IP. Once all hosts are using link-local addresses from the same range, they can communicate with each other.
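A rough sketch of the self-assignment step is shown below (Python standard library; the ARP probe is only noted as a comment because it needs OS-level support):

    import ipaddress
    import random

    LINK_LOCAL = ipaddress.ip_network("169.254.0.0/16")

    # Pick a random candidate from the link-local block, skipping the network
    # and broadcast addresses.
    candidate = ipaddress.ip_address(
        random.randint(int(LINK_LOCAL.network_address) + 1,
                       int(LINK_LOCAL.broadcast_address) - 1))
    print(candidate, candidate.is_link_local)
    # A real implementation would now send ARP probes for this candidate and
    # pick a different address if another host answered.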

The Ways to Improve Fundraising

Donor management is never as easy as it may seem; there is always some information you will need to gather, and some you will need to share, in order to do it well.

Among the many aspects of donor management, the most important is improving fundraising and building the skills to do it well. Here in this article are the ways you can follow to improve the fundraising program for your nonprofit or church.

1. Be Transparent with Your Donors – Though this might seem like an obvious point, it is usually the one most ignored, and the most important of all. Transparency means your donors being able to trust you with everything, from your plans to your ideas and ideologies. It matters because donors will only keep giving when they trust you to steward their money well, and you must be able to show them that you are doing so. By ‘transparent’ we mean both financial and program transparency.

Financial Transparency: You might not consider financial transparency an important point, but it should definitely be on your list. Releasing a statement from time to time showing how you allocate your funds is a start, but your donors are not going to sit and read through a long document. Make sure you give your donors an easy way to digest how you are investing their money: create a graph, chart or infographic. And if it looks like you spent more than expected on, say, fundraising, explain why. Your donors love your mission, and giving them a peek behind the curtain creates a sense of belonging and teamwork.
Program Transparency: Program transparency is all about the IMPACT. If you can show your donors the impact their money has made in changing the lives of those you’re serving, you can be sure you’ve done things right. Create annual reports with graphs showing how far you’ve come with their support, while mentioning exactly where you want more change and where you’re striving to achieve more.

2. Optimize Your Donor Experience – Your donors shouldn’t be with you for just one year or a single giving period, and keeping them longer is only possible if you optimize the donor experience and convince them to stay in contact beyond a single donation. Try personalization (which no longer means just “hey” plus the first name); it is always recommended to stay in touch with donors through emails, letters and phone calls. You can segment based on last gift amount, last gift date, a specific campaign – anything – and then create fundraising messaging around each category.

3. Audit Your Systems – Think this is not important? Wrong! One of the most important ways to improve fundraising is to audit your systems on your end: use the right set of tools and the right techniques. Keep the audit impartial and keep it clear; this will help you understand how far you’ve come with your fundraising program and exactly how far you will be able to go with it.

Fundraising is what benefits churches, charities and nonprofits the most, and an increase in funds over a stipulated period of time is exactly what they’re striving for. It is therefore recommended that you use the tips mentioned above and draft your new ideas and plans for increasing funds accordingly.

Quality Furniture

Events are synonymous with real-world marketing for businesses across industry verticals. They are leveraged by brands small and big alike to reach out to customers and convey their messages. A growing number of firms trust events to build their base, expand their horizons and tap into the potential in the market. Quite clearly, a lot is at stake when a business decides to host an event and penetrate the market deeply. From launching a new product or service to enhancing the goodwill of existing offerings, businesses know where to turn in their hour of need.

With so much benefit at stake, it’s natural to expect your event to be successful so that all goals are realized with ease. For that, you will need an expert agency familiar with every aspect of events, from planning to strategy to hosting to customer service. The job at hand is not easy for the agency either, as it has to take care of many details to ensure the planned event succeeds. Among other things, it has to make sure that the brand messages are conveyed the way they should be.

Further, a good event is one that seamlessly merges aesthetics and functionality. This is where furniture has a role to play, as it often adds a great deal of value to an occasion. The market is stuffed with inventive, high-quality event furniture which can make a big impression at your product launch party or customer get-together. From chairs, sofa seating, stools, benches, poseur tables, dining tables, bars and plinths, your event can benefit from a wide range of furniture and stand out from the crowd.

Further, event planners know how to place furniture in the right places and positions to get maximum impact from it. They bring a creative approach to the occasion and genuinely impress guests and potential customers. It is also cost-effective to hire all of that furniture rather than buy it, while still adding great value to the whole affair. Whether you prefer a classic touch or contemporary styles, you can select what suits your interests and your event best and win maximum attention. After all, the purpose is to create a vibrant atmosphere and let the brand benefit in more ways than one.

Overall, furniture hire is a helpful and innovative concept that can make an event a big success. It has the potential to enrich your business without tying up resources. Your business can benefit from furniture that adds value to the event and to brand-building efforts. For that, you have to find the right agency with years of experience in the domain, along with a company that rents out furniture. This is how your event can become as successful and impactful as you’d expect it to be.

Office Design for Improving Productivity

Sometimes adding chalkboards and whiteboards can seem handy, but there is more you can do to improve your office space. Here are a few office design tips to help improve your overall productivity.

1. Idea Storage

One of the worst things that can happen to creative people is having a great idea, having nowhere to write it down, and losing it. There is also the chance that you will end up doing a huge amount of research on a topic you are not going to use. Whiteboards and notebooks are a great option for writing ideas down so you can continue working on your main task for the day.

2. Remove the Clutter

It is important to clean your office regularly. Clutter comes from your creative mind at work, but it can make focusing and getting your work done difficult. Make sure you have enough storage for all your items and easy access to the objects you use most.

3. Bring in Some Nature

We are biological creatures, so we should spend some amount of time outside every single day, and being inside all the time has a real effect on our work. While it would be nice to spend a lot of time outdoors, for most jobs this is not really possible. If you cannot take your work outside, why not bring nature to you? Try opening the shades and letting in fresh air. This could help you feel more energized and get more done. Plants can also be a great addition to your office; you just have to remember to water them.

4. Table and Chairs

We have all experienced sitting at a table and constantly readjusting just to be comfortable enough to focus on our work. This is why you should take the time to find a desk and chair that fit your body and the way you sit. This can take some adjusting if you work in an office where you do not have control over what items are ordered. If you are working at home, try sitting in chairs you are thinking about buying for around 30 minutes to find out whether they are comfortable for you.