Thursday, March 31, 2016

Fix 'Failed To Fetch' Error With Google Chrome On Ubuntu And Its Derivatives



'Failed To Fetch' Error With Google Chrome

Recently, while updating Ubuntu or an Ubuntu-based system, you might have come across an update error that says:

Fetched 65.9 kB in 5min 48s (188 B/s)
W: Failed to fetch http://dl.google.com/linux/chrome/deb/dists/stable/Release Unable to find expected entry 'main/binary-i386/Packages' in Release file (Wrong sources.list entry or malformed file)
E: Some index files failed to download. They have been ignored, or old ones used instead.

Error while updating Google Chrome on Ubuntu 14.04

You're not alone. This is a common issue that many have faced since Google Chrome ended support for 32-bit Linux, which we already knew about. The change was supposed to affect only 32-bit Linux systems, but it seems that even users running Ubuntu 14.04 or higher on 64-bit systems are affected.
While the error itself is trivial, it is highly annoying. The good news is that it is really easy to get rid of this pain-in-the-neck error.

Fix the 'Failed to fetch' issue:

The problem here is that the sources.list entry, by default, expects the 32-bit package. That sounds weird, but it's true.

What we need to do is change the sources.list entry for Google Chrome and explicitly instruct it to fetch the 64-bit package. How do you modify the sources.list, you may ask?

Well, it's quite easy. It can be done entirely on the command line, but keeping beginners in mind, I'll show you how to do it with a graphical text editor.


Step 1:

Open a terminal and use the command below:

sudo gedit /etc/apt/sources.list.d/google-chrome.list

This will open the Gedit text editor and you'll see a file like this:



Step 2:

Here, all we need to do is replace
" deb http://dl.google.com/linux/chrome/deb/ stable main " with the following line:

deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main

It should look like this now:



Step 3:

Save and close the file. Now run the update again using the command below:

sudo apt-get update

That's it. You won't see the error again. Enjoy error-free updates!
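If you'd rather take the command-line route mentioned earlier, a single sed command can make the same change. This is just a quick sketch: it assumes the default file path created by the Google Chrome package and that the line hasn't already been modified, so back the file up first.

sudo sed -i 's/^deb http/deb [arch=amd64] http/' /etc/apt/sources.list.d/google-chrome.list

Then run sudo apt-get update as usual.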



It's open source ....
     Love It ...!
          Share It...!
                  &
                    Enjoy It Guys...!

Tuesday, March 29, 2016

6 reasons people with disabilities should use Linux


Often, when issues of accessibility and assistive technology are brought up among people with disabilities, the topics center around the usual issues: How can I afford this device? Is it available for me? Will it meet my needs? How will I receive support?
Open source solutions, including any Linux-based operating system, are rarely, if ever, considered. The problem isn't with the solutions; instead, it is a result of a lack of information and awareness of FOSS and GNU/Linux in the disability community, and even among people in general. Here are six solid reasons people with disabilities should consider using Linux:

Customization and modification

Assistive technology has come a long way from its not-so-distant past; however, proprietary devices are limited in their ability to conform and adapt to their users. Few mainstream solutions are available, and even fewer are unlocked, able to be modified at the lower levels. Being able to take existing technology and adapt it to suit one's needs—rather than forcing a person to adapt to the device and/or software—is a strength of open source software and Linux, and is extremely important for those who rely on a device to accomplish what others take for granted every day.

A few years ago, for example, I worked on a grant project in my hometown. Part of the project was to loan netbooks to students with disabilities who showed significant leadership potential. One of the students really enjoyed using the netbook's webcam to take pictures; however, the operating system loaded on the device used text, not images, to differentiate between files and folders. Without being able to read the path to the folder where the webcam saved images, they were not able to find them.

After some discussion, I switched out the operating system for Ubuntu Netbook Remix, which had an easy-to-use GUI and, more importantly, an icon set that used symbols to identify what was contained in the folders: a filmstrip for videos, a music note for sound files, a letter for documents, and a square photo (like a Polaroid) for images. That's all it took—a simple change in icons and the barrier that prevented full use was eliminated.

Stability, reliability, and durability

Whether you rely on a text-to-speech program to communicate with others, a device that assists those who are blind in navigation, a speech-to-text application that assists in typing and input, or something else essential to your daily life, the thing you rely on can't be fragile nor easily broken. A stable platform that can survive extended durations of uptime without freezing, locking up, or crashing is a must. The same kernel that's used to power the world's servers is an obvious choice to keep someone's accessible device running when it's needed most.

Compatibility with obsolete or old hardware

Proprietary assistive technology devices—especially when addressing more severe disabilities—often run on older, dated hardware. Even if one is able to obtain a current version of the software they need, that doesn't always mean the hardware they own will be able to run it. Through Linux, however, an aging device can be rejuvenated, and the person with a disability won't have to constantly upgrade their hardware. This reduces cost, both in time to learn and adapt to new hardware and in monetary expense.

Control and full ownership

For future assistive technology devices to be fully accessible, the software and the device in use must be changeable and adapt to the individual, instead of forcing a change in the individual's ability to adapt to an able world. By having access to the code, people with disabilities are able to inspect and ensure that the software they are using is under their control and working for them. This access also reduces problems with privacy and security, which is doubly important when the device one relies upon handles nearly all of your sensitive data. Without ownership and full control, any benefit an assistive technology device provides is constrained and leveraged against the company that produced it, leaving the assumed purpose of its programming unfulfilled. All of us want the hardware and software we paid for to work for us based on our demands and needs, and people with disabilities are no different.

Assistance from a large, international community

Many of us know the pain of trying to obtain assistance for a proprietary device or program, waiting on the phone and getting only limited help. This becomes even worse when trying to solve an issue with an assistive technology device; support is limited, there are often few or no brick-and-mortar retailers that will replace your device, and due to its unique, locked-in configuration, there are few local individuals who can troubleshoot and solve such problems. When you use Linux, the entire Internet is your resource. Forums, IRC/chat rooms, online videos and tutorials, and more options are all available to guide anyone—from the most novice of beginners to experienced veterans of the sysadmin world—through nearly any difficulty. One notable advantage here is that when someone posts a question or describes a recent headache on the web, countless other people learn about it, and some of them may be asking the same question.

Fun

The fact is, Linux is fun. The thrill of shaping, molding, and customizing a system to individual needs is profound. Showing others what you've put together is just part of it; showing how you did it and how they can too is an integral part of the open-source community. Who wouldn't want to be included in that?

SOURCE: Opensource.com
Creative Commons License


5 things that every open source project manager should consider


In a 2013 survey, 11% of contributors to free and open source software identified as women. But perhaps the future looks brighter? We can answer this question by examining the participation of women in Google Summer of Code, a program that provides a stipend for post-secondary school students to contribute to open source software for a summer. From 2011 to 2015, women made up about 7-10% of the program's participants. This is an extremely low percentage and does not bode well for the future of diversity in open source.

More recently, a study on gender bias in open source was published through a joint effort between North Carolina State University and the California Polytechnic State University at San Luis Obispo. Although the study has been widely distributed and analyzed by several media outlets, I will caveat my own analysis with the statement that the study has yet to undergo peer review. Furthermore, the findings in the study only apply to about 35% of the GitHub user base, which is not representative of the entire open source community in any respect. That being said, the study does conclude that for this particular subset of users, women do tend to have their pull requests accepted more frequently than men. What does this mean? We can conclude that although women don't make up the majority of open source contributors, their contributions are valued to an equal or greater extent than those of men.

It appears that women are more-than-capable developers, yet they make up a small portion of the open source community, and their participation doesn't look likely to increase on its own. How can we change this? By enforcing codes of conduct, effectively moderating communications on pull requests and issues, creating a healthy and welcoming environment for new contributors, building non-digital spaces, and practicing empathy, open source projects can create communities that are diverse and inclusive.

Codes of conduct

A code of conduct is an important part of any open source software project. If you are a corporation looking to take an internal project public, be sure you have a code of conduct in place before doing so. If you are a solo developer getting ready to push the first public commit of a personal project, be sure you have a code of conduct in place before doing so. There should be no hesitation or question about whether or not a code of conduct should be included in a codebase. We have codes of conduct governing our states, our countries, and our global society, so why wouldn't we have codes of conduct present in our software? Codes of conduct solidify the expectations and objectives of a project. Believe it or not, although you might have expectations about what kind of behavior is and isn't acceptable in the community around a project, some people do not. Codes of conduct help set a baseline for the global dialogue around behavior in the digital world.

There are plenty of codes of conduct that you can use as part of your project. I personally recommend using the Contributor Covenant. The language of the document is simple, concise, easy to understand, and already contains several translations. Furthermore, it is currently used by quite reputable open source projects such as Atom and Ruby on Rails.

Dialogue on PRs and issues

Pull requests and issues can be some of the most tense environments in an open source community, and rightly so. In them, people expose their hastily written code and half-baked feature requests in addition to their well-written code and bug-reports. The exposed nature of pull requests and issues leaves new and old contributors alike nervous about their place in the project. Thus, it's important to maintain a civil, empathetic, and respectful tone when communicating with other developers in an online space. This might seem like not much to ask, but you'd be surprised how much damage can be done with an emotional mindset and a readily accessible Enter key.

Tone is difficult to discern in a digital space, and we often end up applying our current emotions and context to the things we read online. If you feel immediately annoyed or insulted by a request or a comment, read it again at another time. If you still feel ambivalent about the tone of the comment, ask for clarification. Never—and I repeat, never—apply a tone or intent to another person's comment that is fueled by your personal state of mind. This simple misunderstanding starts a lot of miscommunications and problems, and can be easily avoided by giving the person that you are communicating with the benefit of the doubt.

That being said, disagreements do happen quite often. When they do get heated or emotional, it's important for project maintainers to uphold the following rules in each discussion:

  • Critique the ideas, not the ideator or his/her abilities.
  • Include an unbiased moderator in all highly opinionated discussions.
  • Enforce the project's code of conduct appropriately.

You'll notice that I didn't mention anything about marginalized individuals in this section. This is because negative interactions on pull requests and issues affect all contributors negatively, but contributors from marginalized backgrounds even more so. When you have to be constantly vigilant about how your tone and ideas are presented in a space, someone receiving your harmless remark or less-than-perfect pull request in a negative manner can have detrimental effects.

The value of new contributors

New contributors are the lifeblood of any open source project. While some people might be tempted to exalt core developers and maintainers, it's actually new contributors that maintain the spirit of the project.
New contributors prevent an open source development team from becoming cliquey. New contributors introduce interesting new ideas to a project. New contributors present ideas that are opposite the status quo. As a result, effective engagement with new contributors should be at the forefront of any open source project.

After writing open source software for several years, you might have forgotten what it feels like to fork a repository for the first time, to nervously tap away at your keyboard writing code that you think is bad, to hover for hours over the Pull Request button wondering how your contribution will be received. Being a new contributor is an emotionally and mentally exhausting process, but there are ways to make it less so for new contributors.

First and foremost, ensuring that you have nicely written contribution guidelines is important. Whether it's a simple Markdown or plain text file, ensure that your guidelines contain complete information on a development setup, the test-driven development process, any style guidelines you use, and any procedures you have around submitting pull requests. Don't be afraid to reproduce information that exists elsewhere. It's perfectly reasonable to provide information about forking and cloning, creating branches, and committing changes. Although it puts a lot of overhead on you as the writer of the guidelines, it provides the new contributor a single, authoritative source for getting involved with a project.

In addition to including thorough contribution guidelines, consider including a screencast that visually takes new contributors through the contribution workflow. If a screencast is too much effort, consider making a quick infographic that describes the workflow. For an example, check out the contribution workflow infographic I made for Project Jupyter. Different people learn in different ways and providing different ways for new contributors to learn about getting involved with your project lets them know that you are aware and mindful of their unique perspective.

However, before new contributors can even begin to engage with a project, they have to know what they can do. Tagging issues in a project with information about the type of contribution required to complete the issue (documentation, test case, feature, bug fix, etc.), the difficulty of a particular issue (low, medium, high), and the priority level can go a long way toward helping new contributors find the perfect issue for their first pull request. Tagging the type and priority of an issue can be relatively easy, but tagging the difficulty does require a bit more nuance. When tagging the difficulty of an issue, I think it is important to consider the following questions:

  1. Does addressing this issue require changes across multiple files?
  2. Does addressing this issue require special knowledge of a particular topic (threading, low-level network communication, etc.)?
  3. Does addressing this issue involve interacting with an undocumented or untested portion of the codebase?

Answering these questions not only helps you provide more information to new contributors, it helps you assess the quality of your own codebase and work toward improvements.

The importance of non-digital spaces

In addition to digital forums, it's also important to have opportunities to form real-world connections around an open source project. These connections don't have to happen at a conference or a meetup organized around the project—they can happen anywhere. There is a certain level of respect and camaraderie that can be achieved when individuals engage with each other in the real world. Members of the community can host pop-up events at local coffee shops or co-working spaces that include collaborative coding, casual conversation, and knowledge sharing. Whether we like it or not, people have different digital and real-world personas, and allowing a community to develop around your project outside the Internet gives potential contributors with strong social skills the opportunity to engage with the project.

Enhancing empathy

In an increasingly connected world, writing software with people across the country, and indeed across the world, is expected in a work environment. The worldwide interactions are even more common in open source, where anyone with access to the Internet and a text editor can learn about and contribute to open source projects. This is bound to cause a lot of tensions as we attempt to engage in technical and non-technical discourse with people from countries we might not even be able to identify on a map. How do we effectively traverse the multicultural ecosystem of open source software? It involves something that you've probably heard time and time again but might have trouble getting a full grasp of: engineering empathy. I believe that empathy is a muscle that you can strengthen. Namely, the following behaviors can go a long way towards improving your empathetic skills:

  1. Read technical blog posts written by developers from different cultures.
  2. Watch technical talks from conferences held in countries different from the one you reside in.
  3. If you have some fluency in another language, try reading the news in that language. This will give you perspective into what it feels like to be a non-native English speaker in open source.

These techniques will help you discover how developers from different backgrounds think, write, speak, and share. Over time, you'll develop a sense of perspective and appreciation for the way that different people approach software development.

Final thoughts

You'll notice that the above tips I described don't specifically target marginalized individuals. When you create a space that is welcoming and receptive of marginalized individuals, you create a space that is welcoming and receptive of everyone.



SOURCE: opensource.com


Creative Commons License

Sunday, March 27, 2016

How To Move Unity Launcher To The Bottom Of The Screen (In Ubuntu 16.04)



Screenshot: Ubuntu 16.04 launcher moved to the bottom of the screen

Finally, the default Unity desktop's left launcher panel can be moved to the bottom of the screen in Ubuntu 16.04 LTS (Xenial Xerus).
Ubuntu 16.04 has reached its final beta, and one of the great new features is that the left launcher panel is now movable: to the left or to the bottom.

While the option to change the position is buried deep in the geeky depths of dconf-editor, it is the one change that everyone is talking about.

At present, you can change the Unity Launcher position in the following two ways:

Method 1: Command line way

For those who prefer Linux commands, this can be done via a single command in the terminal:

Open a terminal (Ctrl+Alt+T) and use the following commands accordingly:

Move the Unity Launcher to bottom:

gsettings set com.canonical.Unity.Launcher launcher-position Bottom

Move the Unity Launcher back to left:

gsettings set com.canonical.Unity.Launcher launcher-position Left

Method 2: Graphical way

1. You need dconf-editor to do this the GUI way, so install it with the following command:

sudo apt-get install dconf-editor

2. Launch dconf-editor after installation, and navigate to "com -> canonical -> unity -> launcher". Finally, change the value of "launcher-position" to set the Unity Launcher position.

To make the bottom panel fit your screen, go to System Settings -> Appearance and change the value of Launcher icon size.

The change takes effect immediately. There is no need to restart or even close the dconf editor.

What do you think of this new feature in Ubuntu 16.04?
See What's New in Ubuntu 16.04


Video : Moving Unity Launcher in Ubuntu 16.04



What's New In Ubuntu 16.04



Screenshot: Ubuntu 16.04 launcher moved to the bottom of the screen

Ubuntu 16.04 (Xenial Xerus), the next major LTS (Long Term Support) release of Ubuntu, is set for its stable release on Thursday, April 21, 2016.
Here's what's new in the next major version of the world's most popular Linux distribution.
Share it with fellow Linux fans and enjoy!

New features in 16.04 LTS

Though we already know that this new release is not bringing big UI changes to the Unity desktop, it's worth mentioning that Ubuntu 16.04 finally lets you move the Unity launcher to the bottom of the screen, nearly six long years after users first asked. And you have the Ubuntu Kylin team to thank for it, as they did the heavy work of hacking Unity to make the launcher movable.

Ubuntu 16.04 finally lets you move the Unity launcher to the bottom of the screen

While the option to change the position is buried deep in the geeky depths of dconf-editor, it is the one change that everyone is talking about.
But it's not the only welcome improvement.

Updated Packages

As with every new release, packages--applications and software of all kinds--are being updated at a rapid pace. Many of these packages came from an automatic sync from Debian's unstable branch; others have been explicitly pulled in for Ubuntu 16.04.

Linux kernel 4.4

This beta of Xenial ships with Linux kernel 4.4, with the following highlights:

  1. Improved Intel Skylake processor support
  2. 3D support in the virtual GPU driver
  3. New driver for Corsair Vengeance K90
  4. Support for TPM 2.0 chips
  5. Journaled RAID 5 support

Linux Kernel 4.4 also includes less obvious changes including bug fixes and improvements for various file systems, power efficiency, and memory management.
Want a fuller overview of the changes? Check it out here.

Core Application Updates

A batch of updates to Ubuntu applications are also included. Among them you'll find:

  1. Firefox 45.0.1
  2. Thunderbird 38.6
  3. LibreOffice 5.1
  4. Chromium 49
  5. Nautilus (aka 'Files') 3.14
  6. Totem (aka 'Videos') 3.18
  7. Rhythmbox 3.3.1
  8. Shotwell 0.22

There are also some new applications included:
  1. GNOME Software 3.19
  2. GNOME Calendar 3.18
  3. Cheese

And several apps have been removed:
  1. Brasero
  2. Empathy
  3. Ubuntu Software Center


Known issues

As is to be expected with any release, there are some significant known bugs that users may run into with this release of Ubuntu 16.04. The ones we know about at this point (and some of the workarounds) are documented here, so you don't need to spend time reporting these bugs again:

Boot, installation and post-install

Errors about swap partition at install

There is a known issue regarding the creation of swap space during the install. In some cases the install will display the error message "Creation of swap space in Partition #n .... failed.". Please see this bug report for more information: https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1552539. You may work around the issue by manually partitioning the disk and removing any existing swap partitions before creating a new one.

Upgrade

14.04 LTS to 16.04 LTS Desktop Upgrade

There is a known issue which prevents the upgrade of 14.04 LTS to 16.04 LTS. Please see this bug for more information.
At this time you should not attempt to upgrade a production desktop from 14.04 LTS to 16.04 LTS.

Amazingly, there is only one known major bug in the release. Conrad explains, "in some cases, attempts to install to a hard drive that already contains a swap partition may fail in the partitioning phase. If you encounter this, the simplest workaround is to boot to the live session, start a terminal (Ctrl-Alt-T) and type 'sudo swapoff -a'. You can then start the installation from the icon on the desktop and it should proceed successfully".

Graphics and Display

fglrx

The fglrx driver is now deprecated in 16.04, and we recommend its open source alternatives (radeon and amdgpu). AMD put a lot of work into the drivers, and we backported kernel code from Linux 4.5 to provide a better experience.
When upgrading to Ubuntu 16.04 from a previous release, both the fglrx driver and the xorg.conf will be removed, so that the system is set to use either the amdgpu driver or the radeon driver (depending on the available hardware).
More information is available here https://tjaalton.wordpress.com/2016/03/11/no-catalystfglrx-video-driver-in-ubuntu-16-04/

Download Ubuntu 16.04 Beta 2

Beta releases of Ubuntu are not for everyone. If you need a stable system, hate bugs, and can't deal with potential package breakages, wait for the stable release next month.
And if you are keen to try it out, stick to running it off a live USB or DVD. Also note that there is a major bug that may cause beta installs to fail on hard drives which already contain a swap partition.

You can download a fresh desktop image of Ubuntu 16.04 Final Beta from the Ubuntu releases server

Download Ubuntu 16.04 Beta 2 Now!

Update:

Download Stable Version Ubuntu 16.04 LTS From Official Website

Today (21 April 2016), the stable version of Ubuntu 16.04 was released.
You can now download and install it on your Linux box. Enjoy Linux!
Don't forget to share it with your friends.

You can download a fresh image of the Ubuntu 16.04 LTS final stable release from the official Ubuntu website:

For 64 Bit systems :

Download Ubuntu 16.04 LTS Desktop (64 bit)

For 32 Bit systems :

Download Ubuntu 16.04 LTS Desktop (32 bit)

Other Downloads :

For More Download Options Go Here....


Thursday, March 24, 2016

Meet ubuntuBSD, UNIX for Human Beings

UbuntuBSD (Ubuntu Linux + BSD):
A Brand New Open Source OS
That Brings The Two Super Powers Together In A Single OS.


UbuntuBSD ScreenShot

BSD -- the open source, Unix-like operating system that lives in Linux's shadow -- is now coming to the Ubuntu world, thanks to a new open source project called UbuntuBSD.

For the uninitiated, here's the BSD back story: Created starting in the late 1970s (originally as an enhanced version of AT&T's Unix operating system, then as a complete replacement for it) at the University of California, Berkeley, BSD was one of the first freely redistributable operating systems. Then, in the 1990s, for various complicated reasons -- largely but not solely related to legal challenges -- BSD took a back seat to other free operating systems that were based on GNU software and the then-new Linux kernel.

A handful of BSD-based systems have survived into the present: FreeBSD, NetBSD, OpenBSD, and others. But they haven't been nearly as influential, within either the server or desktop markets, as GNU/Linux systems.

Now, a group of programmers wants to combine one of the most popular GNU/Linux distributions, Canonical's Ubuntu, with BSD by developing a new operating system called (plainly enough) UbuntuBSD (also dubbed "UNIX for Human Beings" on social networks). The idea is to deliver "the ease and familiarity of Ubuntu with the rock-solid stability and performance of the FreeBSD kernel," according to the project.

The idea is to deliver "the ease and familiarity of Ubuntu with the rock-solid stability and performance of the FreeBSD kernel."

Actually, UbuntuBSD probably won't look too much like Ubuntu to most users. That's because it runs Xfce as its default interface, rather than Unity, the one that ships with Ubuntu. So UbuntuBSD is more like the Ubuntu backend combined with a BSD kernel (specifically, the project is using the FreeBSD kernel) and an alternative interface.

But we're guessing most users won't mind much, especially since Xfce is popular with users who want a desktop interface that is less resource intensive than options like Unity and GNOME, but still looks pretty and delivers extensive functionality.

UbuntuBSD has seen nearly 5,000 downloads already, even though developers warn that it remains in beta form and is not yet ready for production use in all cases.

Look @ UbuntuBSD



In the short term, it's unlikely that UbuntuBSD will see much of a following beyond a certain community of power users who want to experiment with a new open source OS. But the project does provide novel opportunities for integrating the extensive software ecosystem that surrounds Ubuntu with a kernel that offers some features not present in Linux -- not to mention more liberal licensing terms, since the FreeBSD kernel does not use the GNU GPL license. That could lead not just to a new type of desktop OS, but also new server or cloud solutions, down the road.

How To UbuntuBSD




Wednesday, March 23, 2016

BSD For Human Beings? | Interview



BSD for humans - PC-BSD

Continuing our BSD summer interviews, we introduce today a very interesting interview with Kris Moore, the founder and current leader of development of the desktop user-friendly PC-BSD project. Enjoy!

As the founder of PC-BSD, what can you tell us about your decision to start this project? How did you get involved with BSD systems, and what drove you into creating one?

I first ran into FreeBSD back in the mid nineties, when I started working at a local dial-up ISP. At the time it was my first experience with any *nix related system, and I can still recall how exciting it was to be able to log in remotely and run "pine" for this new thing called "E-Mail".

I decided to create PC-BSD in early 2005, after running various Linux desktops, and continually having problems with them. In particular what drove me was how package management was performed in all open-source desktops at the time, with the idea that every package is a part of your "base-system", and an upgrade of one thing could trigger a chain of dependencies causing numerous other things to be touched as well. This felt WAY too familiar, especially after coming off the years of "DLL Hell" in Windows desktops and the problems that caused. Instead I wanted to implement a "clean" package system, which kept the applications separated away from the core desktop, reducing the potential for "Library Hell" and dependency related issues. Thus the PBI (Push Button Installer) package format was born, and desiring to build upon a stable API and base-system, FreeBSD was the logical choice.

What is your current role in PC-BSD?

I'm currently the project director, and lead developer of the system.

Why did you choose FreeBSD as the basis for your system?

I was already somewhat familiar with FreeBSD, but the real deciding factor was trying to find a stable "base OS" for the PBI software, something with a very stable ABI, allowing applications to run on current and future versions. Add in the rock-solid stability, security and it was an easy decision.

What goodies from FreeBSD can be found on PC-BSD? Are all the innovative security tools of FreeBSD available on PC-BSD too?

PC-BSD is 100% FreeBSD under the hood, not a fork, which means all the features of FreeBSD, such as ZFS, PF and others are also available. Over the years we have added various graphical "front-ends" to these features, such as a firewall manager, ZFS installation wizards and more. At this point users wanting to install with ZFS will find the PC-BSD graphical installer the easiest and fastest way to go.

I found the installer and the AppCafe very user-friendly. What else is there to make the user's life easier, while also confirming PC-BSD's desktop focus?

In addition to the easy Installer and AppCafe, we also include graphical utilities for managing wired/wireless networking, system updates, firewall, and others. Coming in 9.1 is also a revamped FreeBSD "Jail" management utility called the "Warden". It will allow users to easily create and manage sandboxed servers, including linux servers, in a secure FreeBSD jail.

Continue Reading....

Thursday, March 17, 2016

What's New In Android N


ANDROID N DEVELOPER PREVIEW

Android N

Google surprised everyone by releasing a developer preview of its next major version of the Android operating system, known as Android N. The Android N Developer Preview offers a look at new features that will be available in the next Android build, including multi-window support, a brand new notification shade, notification enhancements, and more.

What is Android N?

Every major release of Android gets a version number and a nickname. Android 6.x is "Marshmallow." Android 5.x is "Lollipop." Android 4.4 is "KitKat." And so on and so forth.
Alphabetically, "N" is next. We don't yet know what version number Android N will be - Android 7.x is a pretty good guess, but not certain, as Google is only predictable in its unpredictability.
And we also don't yet know what the nickname will be. Google chief Hiroshi Lockheimer teased that "We're nut tellin' you yet." Maybe that's leaning toward "Nutella" – which pretty much is the most tasty treat ever to be tasted - or maybe it's some other sort of "nut." Or a red herring. Point is, we don't have any idea just yet.
We do, however, have a fairly good idea for when we'll actually see Android N be released. We've been told to expect five preview builds in total, with the final public release (including the code push to the Android Open Source Project) to come in Q3 2016. That lines up with previous releases, between October and the end of the year.

Android N could make updates easier on everyone

We've been peeking inside the factory images for the Android N developer preview and have noticed what appears to be a pretty major sea change for Android. The code appears to now be arranged to make it easier for device manufacturers to update their own features and settings without disturbing the core Android parts. (At least not nearly as badly.) We're doing a bit of educated guesswork here, but that means a few things.
It would mean less development overhead for manufacturers. That means less time and money getting updated code to your phone. It also means that regular security updates - an increasingly important part of this whole ecosystem - theoretically will be easier to apply, which means you might actually get them if you're on something other than a Nexus device.

So what's new in Android N?



If you had to boil what a major release of Android (or any operating system) means to just a single acronym, it'd be this: APIs. That's short for Application Programming Interface, and it's what allows apps to do, well, anything. There are a ton of new ones coming in Android N, and we've only gotten a small taste thus far. More should be announced as we get new preview builds.
Some, however, are more anticipated than others.

Multi-window support

With Android N, Google is adding a much-requested multitasking feature into the OS - multi-window support. The new feature will allow users to pop open two apps on the screen at once and run them side-by-side or one-above-the-other in split-screen mode. The company has also added a resize option for apps, which works by dragging the divider between them. Apart from multi-window support, Google says that users on Android TV devices will be able to put apps in picture-in-picture mode, which will allow them to continue showing content while the user browses or interacts with other apps.

Direct reply notifications and bundled notifications

Android N notification screen

We've been able to interact with notifications for a good long while now. Not every app supports this, but look at Gmail as an early example. Pull down the notification, and archive an email without actually having to open it. Brilliant.
Google Hangouts took this a step further, allowing you to reply to messages right from the notification tray, without having to open the app, or the message itself. It's slick.
You'll now be able to have bundled notifications. Or, rather, better bundled notifications. Think of it as more information in one place. Have a half-dozen emails come in through Gmail? You'll be able to see more subject lines at one time in the notification area.

Revamped Doze

Google introduced Doze, a system mode that saves battery when the device is idle, in Android 6.0 Marshmallow. Now in Android N, Google has improved the feature to save battery while on the go. The company says that with new restrictions on apps, it can save battery even when the device is being carried around in a pocket.
"A short time after the screen turns off while the device is on battery, Doze restricts network access and defers jobs and syncs," adds Google. It points out that turning the screen on or plugging in the device will bring it out of Doze automatically.

Background optimisations

In Android N, Google has introduced Project Svelte, which is an effort to optimise the way apps run in the background. The new project is focused on keeping a check on background processing that needlessly consumes RAM (and battery) and affects system performance for other apps.

Android N's Data Saver

Google has introduced an all-new Data Saver mode, which will help reduce cellular data use by apps. With the new Data Saver mode, users will get more control over how apps use cellular data. Once the Data Saver mode is enabled in Settings, the system will block background data and will also signal apps to use less data "wherever possible." Users will also get an option to select specific apps to run in the background and use data even when Data Saver mode is enabled.

Native Number-blocking Support

One of the notable additions in Android N is native support for number blocking in the platform. Google says that the new feature will apply to the default messaging app and the default phone app, and third-party provider apps can also read from and write to the blocked-number list. The company stresses that the list will not be accessible to other apps. With number blocking added to Android N as a native feature, developers can take advantage of the support across a wide range of devices. One of the biggest advantages of having the number-blocking feature in the system is that blocked numbers will persist across resets and devices through the Backup & Restore feature. The company also introduced a call screening feature in Android N which will allow the default phone app to screen incoming calls.

Multi-locale support, more languages

Google in Android N will let users select multiple locales in Settings to support better bilingual use-cases. Apart from multi-locale support, Android N also expands the languages available to users. Android N will offer over 25 variants each for commonly used languages such as English, Spanish, French, and Arabic, while also adding partial support for over 100 new languages.
Some of the accessibility enhancements introduced with Android N include Vision Settings directly on the Welcome screen for new device setup. The new feature will make it easier for users to configure accessibility features on their device.

Direct boot

This splits things into two groups when you first power up your phone. One group is able to do things before you unlock the device; apps like SMS messaging, alarm clocks, and accessibility features may need to use this.


Anything else gets siloed off in a separate storage area until the device is unlocked. That's a very cool preview feature.

Android Beta Program

This is quite a helpful option: you don't have to flash your device anymore, because Google allows you to register your device to receive the beta releases as over-the-air updates. It lets non-technical users update to and try the latest builds.
Android N will be the next big thing in the smartphone OS market. It's still not confirmed whether it will be named just 'N' or something else; for that, we have to wait until Google releases more updates.

Edging toward Java 8

With Android N, Google is bringing support for new Java 8 features to Android. Using the open-source Java Android Compiler Kit - JACK for short - Google allows developers to use native Java features while building applications.
This means developers won't need to write as much support code - known as "boilerplate" code - when they want to do things like create events that listen for input. Some of the features will be supported back to Gingerbread when using JACK, while others are going to be strictly Android N and above.
Maybe the best news is that Google says they are going to monitor the evolution of Java more closely and support new features while doing everything they can to maintain backwards-compatibility. These are things that the folks building the apps that make Android great love to hear.

Other new features in Android N

Some of the other changes introduced in Android N include a redesigned user interface for system Settings, which now includes a hamburger button on the left side of the screen for quickly jumping to a different option without going back to the main menu; framework interfaces and platform support for OpenGL ES 3.2; a new app-switching feature on the Recents button, which also doubles as the split-screen shortcut; and a Night mode.


Wednesday, March 9, 2016

Why Node.js

Why The Hell Would I Use Node.js? A Case-by-Case Tutorial 



Introduction

JavaScript’s rising popularity has brought with it a lot of changes, and the face of web development today is dramatically different. The things that we can do on the web nowadays with JavaScript running on the server, as well as in the browser, were hard to imagine just several years ago, or were encapsulated within sandboxed environments like Flash or Java Applets.
Before digging into Node.js, you might want to read up on the benefits of using JavaScript across the stack which unifies the language and data format (JSON), allowing you to optimally reuse developer resources. As this is more a benefit of JavaScript than Node.js specifically, we won’t discuss it much here. But it’s a key advantage to incorporating Node in your stack.
As Wikipedia states: “Node.js is a packaged compilation of Google’s V8 JavaScript engine, the libuv platform abstraction layer, and a core library, which is itself primarily written in JavaScript.” Beyond that, it’s worth noting that Ryan Dahl, the creator of Node.js, was aiming to create real-time websites with push capability, “inspired by applications like Gmail”. In Node.js, he gave developers a tool for working in the non-blocking, event-driven I/O paradigm.

In one sentence: Node.js shines in real-time web applications employing push technology over websockets. What is so revolutionary about that? Well, after over 20 years of stateless-web based on the stateless request-response paradigm, we finally have web applications with real-time, two-way connections, where both the client and server can initiate communication, allowing them to exchange data freely. This is in stark contrast to the typical web response paradigm, where the client always initiates communication. Additionally, it’s all based on the open web stack (HTML, CSS and JS) running over the standard port 80.
One might argue that we’ve had this for years in the form of Flash and Java Applets—but in reality, those were just sandboxed environments using the web as a transport protocol to be delivered to the client. Plus, they were run in isolation and often operated over non-standard ports, which may have required extra permissions and such.
With all of its advantages, Node.js now plays a critical role in the technology stack of many high-profile companies who depend on its unique benefits.
In this post, I’ll discuss not only how these advantages are accomplished, but also why you might want to use Node.js—and why not—using some of the classic web application models as examples.

How Does It Work?

The main idea of Node.js: use non-blocking, event-driven I/O to remain lightweight and efficient in the face of data-intensive real-time applications that run across distributed devices.
That’s a mouthful.
What it really means is that Node.js is not a silver-bullet new platform that will dominate the web development world. Instead, it’s a platform that fills a particular need. And understanding this is absolutely essential. You definitely don’t want to use Node.js for CPU-intensive operations; in fact, using it for heavy computation will annul nearly all of its advantages. Where Node really shines is in building fast, scalable network applications, as it’s capable of handling a huge number of simultaneous connections with high throughput, which equates to high scalability.
How it works under-the-hood is pretty interesting. Compared to traditional web-serving techniques where each connection (request) spawns a new thread, taking up system RAM and eventually maxing-out at the amount of RAM available, Node.js operates on a single-thread, using non-blocking I/O calls, allowing it to support tens of thousands of concurrent connections (held in the event loop).
A quick calculation: assuming that each thread potentially has an accompanying 2 MB of memory with it, running on a system with 8 GB of RAM puts us at a theoretical maximum of 4000 concurrent connections, plus the cost of context-switching between threads. That’s the scenario you typically deal with in traditional web-serving techniques. By avoiding all that, Node.js achieves scalability levels of over 1M concurrent connections (as a proof-of-concept).
There is, of course, the question of sharing a single thread between all clients requests, and it is a potential pitfall of writing Node.js applications. Firstly, heavy computation could choke up Node’s single thread and cause problems for all clients (more on this later) as incoming requests would be blocked until said computation was completed. Secondly, developers need to be really careful not to allow an exception bubbling up to the core (topmost) Node.js event loop, which will cause the Node.js instance to terminate (effectively crashing the program).
The technique used to avoid exceptions bubbling up to the surface is passing errors back to the caller as callback parameters (instead of throwing them, like in other environments). Even if some unhandled exception manages to bubble up, there are multiple paradigms and tools available to monitor the Node process and perform the necessary recovery of a crashed instance (although you won't be able to recover users' sessions), the most common being the Forever module, or a different approach with the external system tools upstart and monit.
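For readers new to Node, here is a minimal illustration of that error-first callback convention (the file path is just an example): the error is handed to the caller as the first callback argument instead of being thrown up to the event loop.

var fs = require('fs');

// Node core APIs pass any error as the first callback argument.
fs.readFile('/etc/hosts', 'utf8', function (err, contents) {
  if (err) {
    // Handle the failure here; nothing bubbles up to crash the process.
    console.error('Could not read file:', err.message);
    return;
  }
  console.log('File length:', contents.length);
});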

NPM: The Node Package Manager

When discussing Node.js, one thing that definitely should not be omitted is built-in support for package management using the NPM tool that comes by default with every Node.js installation. The idea of NPM modules is quite similar to that of Ruby Gems: a set of publicly available, reusable components, available through easy installation via an online repository, with version and dependency management.
A full list of packaged modules can be found on the NPM website https://npmjs.org/ , or accessed using the NPM CLI tool that automatically gets installed with Node.js. The module ecosystem is open to all, and anyone can publish their own module that will be listed in the NPM repository. A brief introduction to NPM (a bit old, but still valid) can be found at http://howtonode.org/introduction-to-npm.
Some of the most popular NPM modules today are:
  • express - Express.js, a Sinatra-inspired web development framework for Node.js, and the de-facto standard for the majority of Node.js applications out there today.
  • connect - Connect is an extensible HTTP server framework for Node.js, providing a collection of high performance “plugins” known as middleware; serves as a base foundation for Express.
  • socket.io and sockjs - Server-side components of the two most common websockets solutions out there today.
  • Jade - One of the popular templating engines, inspired by HAML, a default in Express.js.
  • mongo and mongojs - MongoDB wrappers to provide the API for MongoDB object databases in Node.js.
  • redis - Redis client library.
  • coffee-script - CoffeeScript compiler that allows developers to write their Node.js programs using Coffee.
  • underscore (lodash, lazy) - The most popular utility library in JavaScript, packaged to be used with Node.js, as well as its two counterparts, which promise better performance by taking a slightly different implementation approach.
  • forever - Probably the most common utility for ensuring that a given node script runs continuously. Keeps your Node.js process up in production in the face of any unexpected failures.
The list goes on. There are tons of really useful packages out there, available to all (no offense to those that I’ve omitted here).

Examples of Where Node.js Should Be Used

CHAT

Chat is the most typical real-time, multi-user application. From IRC (back in the day), through many proprietary and open protocols running on non-standard ports, to the ability to implement everything today in Node.js with websockets running over the standard port 80.
The chat application is really the sweet-spot example for Node.js: it’s a lightweight, high traffic, data-intensive (but low processing/computation) application that runs across distributed devices. It’s also a great use-case for learning too, as it’s simple, yet it covers most of the paradigms you’ll ever use in a typical Node.js application.
Let’s try to depict how it works.
In the simplest example, we have a single chatroom on our website where people come and can exchange messages in one-to-many (actually all) fashion. For instance, say we have three people on the website all connected to our message board.
On the server-side, we have a simple Express.js application which implements two things: 1) a GET ‘/’ request handler which serves the webpage containing both a message board and a ‘Send’ button to initialize new message input, and 2) a websockets server that listens for new messages emitted by websocket clients.
On the client-side, we have an HTML page with a couple of handlers set up, one for the ‘Send’ button click event, which picks up the input message and sends it down the websocket, and another that listens for new incoming messages on the websockets client (i.e., messages sent by other users, which the server now wants the client to display).
When one of the clients posts a message, here’s what happens:
  • Browser catches the ‘Send’ button click through a JavaScript handler, picks up the value from the input field (i.e., the message text), and emits a websocket message using the websocket client connected to our server (initialized on web page initialization).
  • Server-side component of the websocket connection receives the message and forwards it to all other connected clients using the broadcast method.
  • All clients receive the new message as a push message via a websockets client-side component running within the web page. They then pick up the message content and update the web page in-place by appending the new message to the board.
This is the simplest example. For a more robust solution, you might use a simple cache based on the Redis store. Or in an even more advanced solution, a message queue to handle the routing of messages to clients and a more robust delivery mechanism which may cover for temporary connection losses or storing messages for registered clients while they’re offline. But regardless of the improvements that you make, Node.js will still be operating under the same basic principles: reacting to events, handling many concurrent connections, and maintaining fluidity in the user experience.
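To make the flow above concrete, here is a minimal sketch of the server side, assuming Express and Socket.IO; the port, event name, and file name are illustrative and not part of the original description.

var express = require('express');
var http = require('http');
var socketio = require('socket.io');

var app = express();
var server = http.createServer(app);
var io = socketio(server);

// 1) GET '/' serves the page containing the message board and the 'Send' button.
app.get('/', function (req, res) {
  res.sendFile(__dirname + '/index.html');
});

// 2) The websocket server listens for new messages emitted by clients.
io.on('connection', function (socket) {
  socket.on('chat message', function (msg) {
    // Forward the message to every other connected client (the broadcast step).
    socket.broadcast.emit('chat message', msg);
  });
});

server.listen(3000);

The client side would mirror this with a Socket.IO client that emits 'chat message' when the Send button is clicked and appends incoming 'chat message' events to the board.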

API ON TOP OF AN OBJECT DB

Although Node.js really shines with real-time applications, it's quite a natural fit for exposing the data from object DBs (e.g. MongoDB). JSON-stored data allows Node.js to function without the impedance mismatch and data conversion.
For instance, if you're using Rails, you would convert from JSON to binary models, then expose them back as JSON over HTTP when the data is consumed by Backbone.js, Angular.js, etc., or even plain jQuery AJAX calls. With Node.js, you can simply expose your JSON objects with a REST API for the client to consume. Additionally, you don't need to worry about converting between JSON and whatever else when reading or writing from your database (if you're using MongoDB). In sum, you can avoid the need for multiple conversions by using a uniform data serialization format across the client, server, and database.

QUEUED INPUTS

If you’re receiving a high amount of concurrent data, your database can become a bottleneck. As depicted above, Node.js can easily handle the concurrent connections themselves. But because database access is a blocking operation (in this case), we run into trouble. The solution is to acknowledge the client’s behavior before the data is truly written to the database.
With that approach, the system maintains its responsiveness under a heavy load, which is particularly useful when the client doesn't need firm confirmation of a successful data write. Typical examples include: the logging or writing of user-tracking data, processed in batches and not used until a later time; as well as operations that don't need to be reflected instantly (like updating a 'Likes' count on Facebook) where eventual consistency (so often used in the NoSQL world) is acceptable.
Data gets queued through some kind of caching or message queuing infrastructure (e.g., RabbitMQ, ZeroMQ) and digested by a separate database batch-write process, or computation intensive processing backend services, written in a better performing platform for such tasks. Similar behavior can be implemented with other languages/frameworks, but not on the same hardware, with the same high, maintained throughput. In short: with Node, you can push the database writes off to the side and deal with them later, proceeding as if they succeeded.
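A bare-bones sketch of that 'acknowledge now, persist later' idea; an in-memory array stands in for the real message queue, and the route, flush interval, and persistBatch step are purely illustrative.

var express = require('express');
var app = express();

app.use(express.json()); // JSON body parser (built into Express 4.16+)

var pendingWrites = [];

app.post('/track', function (req, res) {
  // Queue the event and reply immediately -- the client does not wait
  // for the database write to finish.
  pendingWrites.push(req.body);
  res.status(202).json({ queued: true });
});

// A separate batch process drains the queue every few seconds.
setInterval(function () {
  if (pendingWrites.length === 0) return;
  var batch = pendingWrites.splice(0, pendingWrites.length);
  // persistBatch(batch) would do the actual bulk insert into the database.
  console.log('Flushing', batch.length, 'queued events');
}, 5000);

app.listen(3000);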

DATA STREAMING

In more traditional web platforms, HTTP requests and responses are treated like isolated events; in fact, they're actually streams. This observation can be utilized in Node.js to build some cool features. For example, it's possible to process files while they're still being uploaded, as the data comes in through a stream and we can process it in an online fashion. This could be done for real-time audio or video encoding, and proxying between different data sources (see next section).
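For example, here is a sketch that hashes a file while it is still being uploaded, using the fact that the request object is a readable stream (the route and the hashing step are illustrative, not from the original article):

var http = require('http');
var crypto = require('crypto');

http.createServer(function (req, res) {
  if (req.method === 'POST' && req.url === '/upload') {
    var hash = crypto.createHash('sha256');
    var received = 0;

    // Each chunk is processed as soon as it arrives -- no need to wait
    // for the whole upload to finish.
    req.on('data', function (chunk) {
      received += chunk.length;
      hash.update(chunk);
    });

    req.on('end', function () {
      res.end(JSON.stringify({ bytes: received, sha256: hash.digest('hex') }));
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(3000);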

PROXY

Node.js is easily employed as a server-side proxy where it can handle a large amount of simultaneous connections in a non-blocking manner. It’s especially useful for proxying different services with different response times, or collecting data from multiple source points.
An example: consider a server-side application communicating with third-party resources, pulling in data from different sources, or storing assets like images and videos to third-party cloud services.
Although dedicated proxy servers do exist, using Node instead might be helpful if your proxying infrastructure is non-existent or if you need a solution for local development. By this, I mean that you could build a client-side app with a Node.js development server for assets and proxying/stubbing API requests, while in production you’d handle such interactions with a dedicated proxy service (nginx, HAProxy, etc.).
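As a rough illustration of that development-time setup, here is a minimal sketch assuming the http-proxy npm package; the /api/ prefix and target URL are placeholders:

// Minimal sketch: a development-time server that serves local assets and
// proxies API calls to another service without blocking the event loop.
const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({});

http.createServer((req, res) => {
  if (req.url.startsWith('/api/')) {
    // Hand API traffic off to a (placeholder) backend with its own response times.
    proxy.web(req, res, { target: 'http://localhost:4000' });
  } else {
    res.end('static asset would be served here');   // stand-in for a static file server
  }
}).listen(3000);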

BROKERAGE - STOCK TRADER’S DASHBOARD

Let’s get back to the application level. Another area where desktop software dominates, but could easily be replaced with a real-time web solution, is brokers’ trading software, used to track stock prices, perform calculations/technical analysis, and create graphs/charts.
Switching to a real-time web-based solution would allow brokers to easily switch workstations or working places. Soon, we might start seeing them on the beach in Florida... or Ibiza... or Bali.

APPLICATION MONITORING DASHBOARD

Another common use-case in which Node-with-websockets fits perfectly: tracking website visitors and visualizing their interactions in real-time. (If you’re interested, this idea is already being productized by Hummingbird.)
You could be gathering real-time stats from your users, or even take it to the next level by introducing targeted interactions with your visitors, opening a communication channel when they reach a specific point in your funnel. (If you’re interested, this idea is already being productized by CANDDi.)
Imagine how you could improve your business if you knew what your visitors were doing in real-time—if you could visualize their interactions. With the real-time, two-way sockets of Node.js, now you can.

SYSTEM MONITORING DASHBOARD

Now, let’s visit the infrastructure side of things. Imagine, for example, a SaaS provider that wants to offer its users a service-monitoring page (e.g., GitHub’s status page). With the Node.js event loop, we can create a powerful web-based dashboard that checks the services’ statuses asynchronously and pushes data to clients over websockets.
Both internal (intra-company) and public services’ statuses can be reported live and in real-time using this technology. Push that idea a little further and try to imagine a Network Operations Center (NOC) monitoring applications at a telecommunications operator, cloud/network/hosting provider, or financial institution, all running on the open web stack backed by Node.js and websockets instead of Java and/or Java Applets.
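A minimal sketch of such a status dashboard, assuming the socket.io npm package and Node 18+ for the built-in fetch; the health-check URLs and polling interval are made up:

// Minimal sketch: poll service health endpoints asynchronously and push the
// results to every connected dashboard client over websockets.
const http = require('http');
const { Server } = require('socket.io');

const server = http.createServer();
const io = new Server(server);

const services = ['http://localhost:4001/health', 'http://localhost:4002/health'];  // made up

setInterval(async () => {
  // All checks run concurrently; the event loop is never blocked while waiting.
  const statuses = await Promise.all(
    services.map(async (url) => {
      try {
        const res = await fetch(url);          // built-in fetch, Node 18+
        return { url, up: res.ok };
      } catch {
        return { url, up: false };
      }
    })
  );
  io.emit('status', statuses);                 // push, rather than making clients poll
}, 5000);

server.listen(3000);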
Note: Don't try to build hard real-time systems in Node (i.e., systems requiring consistent response times). Erlang is probably a better choice for that class of application.

Where Node.js Can Be Used

SERVER-SIDE WEB APPLICATIONS

Node.js with Express.js can also be used to create classic web applications on the server side. However, while possible, this request-response paradigm, in which Node.js carries around rendered HTML, is not the most typical use case. There are arguments to be made for and against this approach; a minimal sketch follows the list below. Here are some facts to consider:
Pros:
  • If your application doesn’t have any CPU-intensive computation, you can build it in JavaScript top to bottom, even down to the database level, if you use a JSON-storage object DB like MongoDB. This eases development (including hiring) significantly.
  • Crawlers receive a fully-rendered HTML response, which is far more SEO-friendly than, say, a Single Page Application or a websockets app run on top of Node.js.
Cons:
  • Any CPU-intensive computation will block Node.js’s responsiveness, so a threaded platform is a better approach. Alternatively, you could try scaling out the computation. [1]
  • Using Node.js with a relational database is still quite a pain (see below for more detail). Do yourself a favour and pick any other environment like Rails, Django, or ASP.NET MVC if you’re trying to perform relational operations.
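As referenced above, here is a minimal sketch of the classic server-rendered approach, assuming the express npm package; the route and data are placeholders:

// Minimal sketch: a classic request/response app where Node.js renders HTML on
// the server, so crawlers receive a fully-rendered page.
const express = require('express');
const app = express();

const articles = [{ id: 1, title: 'Hello from the server side' }];   // placeholder data

app.get('/articles', (req, res) => {
  // Fully-rendered HTML goes over the wire.
  const list = articles.map((a) => `<li>${a.title}</li>`).join('');
  res.send(`<html><body><ul>${list}</ul></body></html>`);
});

app.listen(3000);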

Where Node.js Shouldn’t Be Used

SERVER-SIDE WEB APPLICATION W/ A RELATIONAL DB BEHIND

Comparing Node.js with Express.js against Ruby on Rails, for example, there is a clear decision in favour of the latter when it comes to relational data access.
Relational DB tools for Node.js are still in their early stages; they’re rather immature and not as pleasant to work with. On the other hand, Rails automagically provides data access setup right out of the box together with DB schema migrations support tools and other Gems (pun intended). Rails and its peer frameworks have mature and proven Active Record or Data Mapper data access layer implementations, which you’ll sorely miss if you try to replicate them in pure JavaScript. [2]
Still, if you’re really inclined to remain JS all-the-way (and ready to pull out some of your hair), keep an eye on Sequelize and Node ORM2—both are still immature, but they may eventually catch up.
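If you do want to see what the Sequelize route looks like, here is a minimal sketch assuming the sequelize and pg npm packages; the connection string, model, and fields are made up:

// Minimal sketch of relational access with Sequelize; everything below
// (connection string, model, fields) is made up for illustration.
const { Sequelize, DataTypes } = require('sequelize');

const sequelize = new Sequelize('postgres://user:pass@localhost:5432/demo');

const User = sequelize.define('User', {
  name: { type: DataTypes.STRING, allowNull: false },
  email: { type: DataTypes.STRING, unique: true },
});

async function main() {
  await sequelize.sync();   // creates the table; roughly what Rails migrations hand you for free
  await User.create({ name: 'Ada', email: 'ada@example.com' });
  const users = await User.findAll({ where: { name: 'Ada' } });
  console.log(users.map((u) => u.toJSON()));
}

main();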

HEAVY SERVER-SIDE COMPUTATION/PROCESSING

When it comes to heavy computation, Node.js is not the best platform around. No, you definitely don’t want to build a Fibonacci computation server in Node.js. In general, any CPU-intensive operation annuls all the throughput benefits Node offers with its event-driven, non-blocking I/O model, because any incoming requests will be blocked while the thread is occupied with your number-crunching.
As stated previously, Node.js is single-threaded and uses only a single CPU core. When it comes to adding concurrency on a multi-core server, there is some work being done by the Node core team in the form of a cluster module (see http://nodejs.org/api/cluster.html). You can also run several Node.js server instances pretty easily behind a reverse proxy via nginx.
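A minimal sketch of that cluster approach, using only core modules; the port and response are placeholders:

// Minimal sketch of the cluster module: fork one worker per CPU core so that
// several single-threaded event loops share the same port.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // The master process only forks and supervises workers.
  os.cpus().forEach(() => cluster.fork());
  cluster.on('exit', () => cluster.fork());   // replace a crashed worker
} else {
  // Each worker runs its own event loop and shares the listening socket.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(3000);
}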
Even with clustering, you should still offload all heavy computation to background processes written in a more appropriate environment and have them communicate via a message queue server like RabbitMQ.
Even though your background processing might run on the same server initially, such an approach has the potential for very high scalability. Those background processing services can easily be distributed out to separate worker servers without any need to reconfigure the load on the front-facing web servers.
Of course, you’d use the same approach on other platforms too, but with Node.js you get that high reqs/sec throughput we’ve talked about, as each request is a small task handled very quickly and efficiently.

Conclusion

We’ve discussed Node.js from theory to practice, beginning with its goals and ambitions, and ending with its sweet spots and pitfalls. When people run into problems with Node, it almost always boils down to the fact that blocking operations are the root of all evil—99% of Node misuses come as a direct consequence.
Remember: Node.js was never created to solve the compute scaling problem. It was created to solve the I/O scaling problem, which it does really well.
Why use Node.js? If your use case does not involve CPU-intensive operations or access to blocking resources, you can exploit the benefits of Node.js and enjoy fast and scalable network applications. Welcome to the real-time web.
  [1] An alternative to these CPU-intensive computations is to create a highly scalable MQ-backed environment with back-end processing to keep Node as a front-facing ‘clerk’ handling client requests asynchronously.
  [2] It’s possible and not uncommon to use Node solely as a front end, while keeping your Rails back end and its easy access to a relational DB.

Source: TOPTAL