GNOME

GNOME (pronounced /gəˈnəʊm/ in RP or pronounced /gəˈnoʊm/ in the US/Canada)[1] is a desktop environment—the graphical user interface which runs on top of a computer operating system—composed entirely of free software. It is an international project that includes creating software development frameworks, selecting application software for the desktop, and working on the programs which manage application launching, file handling, and window and task management.

GNOME is part of the GNU Project and can be used with various Unix-like operating systems, most notably those built on top of the Linux kernel and the GNU system, and as part of the Java Desktop System on Solaris.

The name originally stood for GNU Network Object Model Environment.

Aims

The GNOME project provides two things: The GNOME desktop environment, an intuitive and attractive desktop for users, and the GNOME development platform, an extensive framework for building applications that integrate into the rest of the desktop.
— GNOME website[2]

The GNOME project puts heavy emphasis on simplicity, usability, and making things “just work”. The other aims of the project are:

  • Freedom—to create a desktop environment that will always have the source code available for re-use under a free software license.
  • Accessibility—ensuring the desktop can be used by anyone, regardless of technical skill or physical disability.
  • Internationalization and localization—making the desktop available in many languages. At the moment GNOME is being translated into over 100 languages.[3]
  • Developer-friendliness—ensuring it is easy to write software that integrates smoothly with the desktop, and allowing developers a free choice of programming language.
  • Organization—a regular release cycle and a disciplined community structure.
  • Support—ensuring backing from other institutions beyond the GNOME community.

History

In 1996, the KDE project was started. KDE was free software from the start, but members of the GNU project were concerned with KDE's dependence on the then non-free Qt widget toolkit. In August 1997, two projects were started in response to this issue: the Harmony toolkit (a free replacement for the Qt libraries) and GNOME (a different desktop not using Qt, but built entirely on top of free software).[4] The initial project leaders for GNOME were Miguel de Icaza and Federico Mena.

In place of the Qt toolkit, GTK+ was chosen as the base of the GNOME desktop. GTK+ uses the GNU Lesser General Public License (LGPL), a free software license that allows GPL-incompatible software (including proprietary software) to link to it. The GNOME desktop itself is licensed under the LGPL for its libraries, and the GPL for applications that are part of the GNOME project. Having the toolkit and libraries under the LGPL allowed applications written for GNOME to use a much wider set of licenses (including proprietary software licenses).[5]

In 1998, Qt became open source, dual-licensed under the QPL and the GPL. Even so, the freedom to link proprietary software with GTK+ at no charge continued to set GTK+ apart from Qt. With Qt available under the GPL, the Harmony Project halted its efforts at the end of 2000, as KDE no longer depended on non-free software. GNOME development, by contrast, continues as of 2009.

In March 2009, Qt 4.5 was released, adding another licensing option, the LGPL.

Name

The name “GNOME” was proposed as an acronym of GNU Network Object Model Environment by Elliot Lee, one of the authors of ORBit and the Object Activation Framework.[citation needed] It refers to GNOME’s original intention of creating a distributed object framework similar to Microsoft’s OLE.[6] This no longer reflects the core vision of the GNOME project, and the full expansion of the name is now considered obsolete. As such, some members of the project advocate dropping the acronym and renaming “GNOME” to “Gnome”.[7]

Project structure

As with most free software projects, the GNOME project is loosely managed. Discussion chiefly occurs on a number of public mailing lists.[8]

In August 2000 the GNOME Foundation was set up to deal with administrative tasks and press interest and to act as a contact point for companies interested in developing GNOME software. While not directly involved in technical decisions, the Foundation does coordinate releases and decide which projects will be part of GNOME. Membership is open to anyone who has made a non-trivial contribution to the project.[9] Members of the Foundation elect a board of directors every November, and candidates for the positions must be members themselves.

Developers and users of GNOME gather at an annual meeting known as GUADEC in order to discuss the current state of the project and its future direction.[10]

GNOME often incorporates standards from freedesktop.org so that GNOME applications integrate more smoothly with other desktops (and vice versa), encouraging cooperation as well as competition.

Major subprojects

GNOME is built from a large number of different projects. A few of the major ones are listed below:

  • Bonobo – a compound document technology (obsolete in current releases).
  • GConf – for storing application settings.
  • GVFS – a virtual file system.
  • GNOME Keyring – for storing encryption keys and security information.
  • GNOME Translation Project – translates documentation and applications into many languages.
  • GTK+ – a widget toolkit used for constructing graphical applications. The use of GTK+ as the base widget toolkit allows GNOME to benefit from certain features such as theming (the ability to change the look of an application) and smooth anti-aliased graphics. Sub-projects of GTK+ provide object-oriented programming support (GObjects), extensive support of international character sets and text layout (Pango) and accessibility (ATK). GTK+ reduces the amount of work required to port GNOME applications to other platforms such as Windows and Mac OS X.
  • Human interface guidelines (HIG) – research and documentation on building easy-to-use GNOME applications.
  • LibXML – an XML library.
  • ORBit – a CORBA ORB for software componentry.

A number of language bindings are available, allowing applications to be written in a variety of programming languages, such as C++ (gtkmm), Java (java-gnome), Ruby (ruby-gnome2), C# (Gtk#), Python (PyGTK), Perl (gtk2-perl) and many others. The only languages currently used in applications that are part of an official GNOME desktop release are C, C# and Python.

Look and feel

GNOME is designed around the traditional computing desktop metaphor. Its handling of windows, applications and files is similar to that of contemporary desktop operating systems. In its default configuration, the desktop has a launcher menu for quick access to installed programs and file locations; open windows may be accessed via a taskbar along the bottom of the screen, and the top-right corner features a notification area where programs display notices while running in the background. However, these features can be moved almost anywhere the user desires, replaced with other functions or removed altogether.

GNOME uses Metacity as its default window manager. Users can change the appearance of their desktop through themes, which are sets consisting of an icon set, a window manager border style, and a GTK+ theme engine with its parameters. Popular GTK+ themes include Bluecurve and Clearlooks (the current default theme).

GNOME puts emphasis on being easy for everyone to use. The HIG helps guide developers in producing applications which look and behave similarly, in order to provide a cohesive GNOME interface.

Usability

Since GNOME v2.0, a key focus of the project has been usability. As part of this, the GNOME Human Interface Guidelines (HIG) were created, an extensive guide for creating quality, consistent and usable GUI programs, covering everything from GUI design to recommended pixel-based widget layout.

During the v2.0 rewrite, many settings were deemed to be of little or no value to the majority of users and were removed. For instance, the preferences section of the Panel was reduced from a dialog of six tabs to one with two tabs. Havoc Pennington summarized the usability work in his 2002 essay "Free Software UI", emphasizing the idea that all preferences have a cost, and that it is better to "unbreak the software" than to add a UI preference to work around the breakage:[12]

A traditional free software application is configurable so that it has the union of all features anyone's ever seen in any equivalent application on any other historical platform. Or even configurable to be the union of all applications that anyone's ever seen on any historical platform (Emacs *cough*).

Does this hurt anything? Yes it does. It turns out that preferences have a cost. Of course, some preferences also have important benefits - and can be crucial interface features. But each one has a price, and you have to carefully consider its value. Many users and developers don't understand this, and end up with a lot of cost and little value for their preferences dollar.


Releases

Each of the parts making up the GNOME project has its own version number and release schedule. However, individual module maintainers coordinate their efforts to create a full GNOME stable release on a roughly six-month schedule.

The releases listed in the table below are classed as stable.

Version Date Information

August 1997[13] GNOME development announced
1.0 March 1999[14] First major GNOME release
1.0.53 October 1999[15] "October"
1.2 May 2000[16] "Bongo"
1.4 April 2001[17] "Tranquility"
2.0 June 2002[18] Major upgrade based on GTK2. Introduction of the Human Interface Guidelines.
2.2 February 2003[19] Multimedia and file manager improvements.
2.4 September 2003[20] "Temujin": Epiphany, accessibility support.
2.6 March 2004[21] Nautilus changes to a spatial file manager, and a new GTK+ file dialog is introduced. A short-lived fork of GNOME, GoneME, is created as a response to the changes in this version.
2.8 September 2004[22] Improved removable device support, adds Evolution.
2.10 March 2005[23] Lower memory requirements and performance improvements. Adds: new panel applets (modem control, drive mounter and trashcan); and the Totem and Sound Juicer applications
2.12 September 2005[24] Nautilus improvements; improvements in cut/paste between applications and freedesktop.org integration. Adds: Evince PDF viewer; New default theme: Clearlooks; menu editor; keyring manager and admin tools. Based on GTK+ 2.8 with cairo support.
2.14 March 2006[25] Performance improvements (over 100% in some cases); usability improvements in user preferences; GStreamer 0.10 multimedia framework. Adds: Ekiga video conferencing application; Deskbar search tool; Pessulus lockdown editor; Fast user switching; Sabayon system administration tool.
2.16 September 2006[26] Performance improvements. Adds: Tomboy notetaking application; Baobab disk usage analyser; Orca screen reader; GNOME Power Manager (improving laptop battery life); improvements to Totem, Nautilus; compositing support for Metacity; new icon theme. Based on GTK+ 2.10 with new print dialog.
2.18 March 2007[27] Performance improvements. Adds: Seahorse GPG security application, allowing encryption of emails and local files; Baobab disk usage analyser improved to support ring chart view; Orca screen reader; improvements to Evince, Epiphany and GNOME Power Manager, Volume control; two new games, GNOME Sudoku and glchess. MP3 and AAC audio encoding.
2.20 September 2007[28] Tenth anniversary release. Evolution backup functionality; improvements in Epiphany, EOG, GNOME Power Manager; password keyring management in Seahorse. Adds: PDF forms editing in Evince; integrated search in the file manager dialogs; automatic multimedia codec installer.
2.22 March 2008[29] Addition of Cheese, a tool for taking photos from webcams and Remote Desktop Viewer; basic window compositing support in Metacity; introduction of GVFS; improved playback support for DVDs and YouTube, MythTV support in Totem; internationalised clock applet; Google Calendar support and message tagging in Evolution; improvements in Evince, Tomboy, Sound Juicer and Calculator.
2.24 September 2008[30] Addition of the Empathy instant messenger, Ekiga 3.0, tabbed browsing in Nautilus, better multiple screens support and improved digital TV support.
2.26 March 2009[31] New Disc Burning application Brasero, simpler file sharing, media player improvements, support for multiple monitors and fingerprint reader support.

Source code

GNOME releases are made to the ftp.gnome.org FTP server[32] in the form of source code with configure scripts, which are compiled by operating system vendors and integrated with the rest of their systems before distribution. Most vendors use only stable and tested versions of GNOME, and provide it in the form of easily installed, pre-compiled packages. The source code of every stable and development version of GNOME is stored in the GNOME Subversion source code repository.[33]

A number of build-scripts (such as JHBuild or GARNOME) are available to help automate the process of compiling the source code.

Future developments

There are many sub-projects under the umbrella of the GNOME project, and not all of them are currently included in GNOME releases. Some are considered purely experimental concepts, or for testing ideas that will one day migrate into stable GNOME applications; others are code that is being polished for direct inclusion.

GNOME 3.0

The next version of the desktop environment was officially announced at the 2008 GUADEC conference held in Istanbul in July. Release has been targeted for 2010, in place of version 2.30 of the current branch. Although the desktop will undergo a major revision, changes planned so far are mostly incremental.[34]

Usage

GNOME is the default desktop environment for several Linux distributions, most notably Debian, Fedora and Ubuntu.

For derived and other distributions, see Comparison of Linux distributions.


Do you know Linus Torvalds?


Linus Benedict Torvalds (pronounced [ˈliːnɵs ˈtuːrvalds]; born December 28, 1969 in Helsinki, Finland) is a Finnish software engineer best known for having initiated the development of the Linux kernel. He later became the chief architect of the Linux kernel, and now acts as the project's coordinator.


Biography

Early years

Linus Torvalds was born in Helsinki, Finland, the son of journalists Anna and Nils Torvalds,[2] and the grandson of poet Ole Torvalds. His family belongs to the Swedish-speaking minority (5.5%) of Finland's population. Torvalds was named after Linus Pauling, the American Nobel Prize-winning chemist, although in the book Rebel Code: Linux and the Open Source Revolution, Torvalds is quoted as saying, "I think I was named equally for Linus the Peanuts cartoon character," noting that this makes him half "Nobel-prize-winning chemist" and half "blanket-carrying cartoon character".[3] Both of his parents were campus radicals at the University of Helsinki in the 1960s.

Torvalds attended the University of Helsinki from 1988 to 1996, graduating with a master's degree in computer science. His M.Sc. thesis was titled Linux: A Portable Operating System. His academic career was interrupted after completing his first year of study when he joined the Finnish Army, selecting the 11-month officer training program, thus fulfilling the mandatory military service of Finland. In the army he held the rank of second lieutenant, with the role of fire controller, calculating positions of guns, targets, and trajectories, finally telling the guns where to shoot.[4] In 1990, he resumed his university studies, and was exposed to UNIX for the first time, in the form of a DEC MicroVAX running ULTRIX.[5] In June 2000, the University of Helsinki issued Torvalds an honorary doctorate.[6]

His interest in computers began with a Commodore VIC-20.[7] After the VIC-20 he purchased a Sinclair QL, which he modified extensively, especially its operating system. He wrote an assembler and a text editor for the QL, as well as a few games.[8] He is known to have written a Pac-Man clone named Cool Man. On January 2, 1991 he purchased an Intel 80386-based IBM PC[9] and spent a month playing the game Prince of Persia before receiving his copy of MINIX, which in turn enabled him to begin his work on Linux.[3]

Later years

Linus Torvalds is married to Tove Torvalds (née Monni) — a six-time Finnish national karate champion — whom he first met in the autumn of 1993.[10] Torvalds was running introductory computer laboratory exercises for students and instructed the course attendants to send him an e-mail as a test, to which Tove responded with an e-mail asking for a date.[3] Tove and Linus were later married and have three daughters, Patricia, Daniela, and Celeste.[11]

After a visit to Transmeta in late 1996,[1] he accepted a position at the company in California, where he would work from February 1997 through June 2003. He then moved to the Open Source Development Labs, which has since merged with the Free Standards Group to become the Linux Foundation, under whose auspices he continues to work. In June 2004, Torvalds and his family moved to Portland, Oregon to be closer to the consortium's Beaverton, Oregon-based headquarters.

From 1997 to 1999 he was involved in 86open helping to choose the standard binary format for Linux and Unix.

Red Hat and VA Linux, both leading developers of Linux-based software, presented Torvalds with stock options in gratitude for his creation.[12] In 1999, both companies went public and Torvalds' net worth shot up to roughly $20 million.[13][14]

His personal mascot is a penguin nicknamed Tux, which has been widely adopted by the Linux community as the mascot of the Linux kernel.

Torvalds generally stays out of non-kernel-related debates. Although Torvalds believes that "open source is the only right way to do software", he also has said that he uses the "best tool for the job", even if that includes proprietary software.[15] He has been criticized for his use and alleged advocacy of the proprietary BitKeeper software for version control in the Linux kernel. However, Torvalds has since written a free-software replacement for BitKeeper called Git. Torvalds has commented on official GNOME developmental mailing lists that, in terms of desktop environments, he encourages users to switch to KDE.[16][17] However, Torvalds thinks KDE 4.0 was a "disaster" because of its lack of maturity, so he switched temporarily to GNOME.[18]

Have you met Linus Torvalds?

Linus Benedict Torvalds (born December 28, 1969 in Helsinki, Finland) is a Finnish software engineer known as the initiator of the development of the Linux kernel. He now acts as the project's coordinator.


Linux was inspired by Minix (an operating system developed by Andrew S. Tanenbaum) as an effort to build a Unix-like operating system that could run on a PC. Linux can now run on many other architectures as well.


When Linus Torvalds, a quiet Finnish student, gave away the floppy-disk-sized source code of the Linux kernel over the internet in 1991, he had no idea that what he started would later give birth to a business worth billions of dollars.


He never imagined that Linux would go on to become the most promising operating system, one that can be embedded in servers, desktop computers, tablet PCs, PDAs, mobile phones, GPS units, robots, cars, and even NASA's space shuttle.


Beyond that, many Linux enthusiasts (Linuxers) buy Apple hardware and replace its operating system with Linux. To me that is a little crazy, since wiping the Mac and iPod operating systems means throwing money away, and replacing the operating system is harder than on a Windows-based desktop. Today 20% of the worldwide desktop market runs Linux, far ahead of Macintosh and steadily chasing the Windows desktop. And 12.7% of the world's servers run Linux, well ahead of UNIX, BSD and Solaris, continuing to eat into Microsoft's server market share.


Linus has now left a promising position at the semiconductor company Transmeta and lives with his wife and three children on a hill in a village near Portland, Oregon, USA, close to the headquarters of the Open Source Development Labs. This non-profit organization is staffed by some twenty programmers who share much of Linus's passion. They continue to develop the Linux kernel, now some 290 megabytes in size, or more than 9 million lines of code. Linus and his team receive contributions of code from all over the world, sort them, set priorities, and merge the most brilliant ideas into the kernel. OSDL itself is backed by dozens of IT giants such as IBM, HP, Dell and Sun, in both money and manpower.


Linus was not the first person to give away source code; the practice was common in the early days of the computer industry. But Linus succeeded in setting a standard that pushed many vendors to open up the source code of their own programs, from BSD, Solaris, Suse and Java to Adobe.


Although he earns only a few hundred thousand dollars a year, Linus has created many multimillionaires across the computer industry, from RedHat, Suse, Debian, Mandriva and Ubuntu to many other open source software developers. Little about Linus has changed. Arriving late to an IT conference, he thought nothing of sitting on the floor in shorts and his favorite sandals. He was not even angry when, mid-speech at the podium, he was interrupted by several BSD programmers who climbed onto the stage claiming that the BSD kernel was far superior to the Linux kernel. He simply put on the BSD T-shirt the protesters handed him and carried on with his speech.


According to Linus, what he does is simply about sharing. Unlike Richard M. Stallman, who is a zealot for the concept of free software, Linus stresses only openness, and does not mind if an operating system ends up mixing free and proprietary programs.


Almost every word Linus utters becomes gospel among Linuxers, setting standards of its own. His every publication, speech, e-mail and press release is eagerly awaited by millions. In his spare time, Linus rides his bicycle down the hill and has a drink at the village bar. If the computer world has a prophet, it is surely Linus (and Steve Wozniak). And the devil, of course, is Bill Gates :)

TOT Linux File Server and Data Server Training, 4th Cohort

A joint activity of RISTEK and PT. ARDELINDO, the 4th cohort of the TOT Linux Server & Data Center training is part of a series of 8 advanced TOT courses. It took place on 19 March 2009 at the Ristek Test Bed Lab. The aim of the activity is to improve human-resource capabilities in the field of Linux file servers and data centers (Linux system administration), with material covering file sharing, SMB (Samba) and data center servers.

This round of training was attended by 15 participants from various private companies, government agencies, research bodies and educational institutions, including representatives of PT. Inti Ganda Perdana, PT. Supra Primatama Nusantara - Biznet, PT. Bangsawan Cyberindo, FK Univ. YARSI, Institut Teknologi Indonesia, Perguruan Islam Darussalam, SMKN 2 Temanggung JATENG, SMP/SMA Budi Mulia Karawang, Dewan Riset Nasional, UPT. Balai Pengolahan Mineral Lampung - LIPI, LAPAN, BPPT and DEPDAGRI. It is hoped that participants will build on these skills and put them to use in their respective institutions.

USU wants to become a WORLD CLASS University

Universitas Sumatera Utara, one of Indonesia's state universities, wants to earn the title of "World Class University" under the theme University for Industry, and to that end it is now working hard to improve all of its university systems. As conveyed by the Rector of USU, Prof. Chairuddin P. Lubis, DTMA, DSAk, USU will expand the content of the USU website, including material for all courses in the USU e-Learning application and the works of the academic community in the USU Repository. According to the Rector, with this website much of the assessment carried out by the National Accreditation Board (BAN) can then be done simply by visiting the USU site. He added that enriching the site's content is absolutely necessary so that what exists at USU is accurately reflected there. USU will also landscape the campus grounds and gardens so that the campus is cool and pleasant to look at.

One of the requirements that must be met is e-learning, maintained by the lecturers and accessible to students both inside and outside USU. For now, however, USU's e-learning still seems underdeveloped.


University of North Sumatera

Daemon Tools

Daemon Tools (styled DAEMON Tools by its creators) is a disk image emulator and optical disc authoring program for Microsoft Windows. Daemon Tools originally grew out of the development of another program, Generic SafeDisc emulator, and incorporated all of its features. The program is able to defeat most copy protection schemes such as SafeDisc and SecuROM.[1] It is currently compatible with Windows XP and Windows Vista.

Supported file types

As of January 2008, the following image formats are supported:[2]

Editions

Versions prior to v4.00 had only one edition. That edition was freeware, contained no adware, and offered only disc-image emulation (no image conversion, creation, burning, and so forth). Version 3.47 is the last such version.

Since version 4.00, the product has been available in several editions: Lite (free for non-commercial use), Pro Standard and Pro Advanced, plus a time-limited Pro evaluation build. A feature comparison is given below:[3]

Feature | Lite | Pro Evaluation (Standard / Advanced) | Pro Standard | Pro Advanced
Graphical user interface | Yes (Mount'n'Drive manager) | Yes | Yes | Yes
Shell extensions | Yes | Yes | Yes | Yes
Image creation | Yes (without preset profiles) | Yes | Yes | Yes
Command-line interface | Yes | Yes | Yes | Yes
Maximum number of virtual SCSI CD/DVD devices | 4 | 16 / 32 | 16 | 32
Maximum number of virtual IDE CD/DVD devices | 0 | 0 / 2 | 0 | 2
Image mounting to the virtual devices | Yes | Yes | Yes | Yes
Image mounting to physical folders | No | Yes | Yes | Yes
Image collection management | No | Yes | Yes | Yes
Image compression/encryption | No | Yes | Yes | Yes
System Tray Agent | Yes | Yes | Yes | Yes
Virtual devices' properties monitoring | No | Yes | Yes | Yes
Image converter | No | No / Yes | No | Yes
Included advertising software | None | None | None | None
Cost-free? | Yes (non-commercial use) | Yes (20-day evaluation period) | No | No

Blacklisting

Some software publishers go to great lengths to disable or frustrate Daemon Tools. For example, some games will check whether the Daemon Tools driver is loaded, and if so will take some action, such as uninstalling the toolset altogether. New releases of Daemon Tools take various measures to ensure the functionality of the application. For example, revision 4.06 randomizes the name of the virtual driver installed by the software.[citation needed]

Daemon Tools currently uses rootkit technology to hide from other applications and the operating system itself. This often leads to false reports by antivirus and anti-rootkit software (such as RootkitRevealer).[4]

Y.A.S.U.

Y.A.S.U. (Yet Another SecuROM Utility) is a small tool that works as a "SCSI-drive protector". It was created by sYk0, who also created CureROM (though CureROM uses a different method to protect SCSI drives).

It is a simple utility that can be used to hide emulated drives from SecuROM 7 and SafeDisc 4. YASU is a companion program for Daemon Tools and is currently hosted, supported and maintained by the Daemon Tools team and copybase.org.


Parameter Arrays

If you take a look at the classes found in the common language runtime (CLR), you’ll find more than a few with methods that can accept a variable-length list of parameters. One example would be the System.Console.WriteLine method, which has an overloaded declaration that works to support replaceable parameters in the string written to the console. For example, this code:
Console.WriteLine("{0} jumped over {1}.", "The cow", "the moon");

produces the following output:
The cow jumped over the moon.

Any number of replaceable parameters can be specified in this fashion, which means that the number of arguments passed to the WriteLine method can vary from call to call. C# supports this behavior with the params keyword, which, when used before an array type in a function’s argument list, creates a parameter array. You can use this array to fake optional parameters in practice. Check out Listing A for an example of this in action.
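Listing A itself is not reproduced in this excerpt, so the following is only a minimal sketch of a `params` method along the lines the article describes; the `OptionalStrings` behavior shown here is an illustrative assumption, not the article's actual listing.

```csharp
using System;

class Program
{
    // Sketch of a parameter array: callers may pass zero or more strings.
    static string OptionalStrings(params string[] args)
    {
        // With no arguments, args is simply an empty array, never null.
        return args.Length + ": " + String.Join(", ", args);
    }

    static void Main()
    {
        Console.WriteLine(OptionalStrings());                       // zero arguments
        Console.WriteLine(OptionalStrings("The cow"));              // one argument
        Console.WriteLine(OptionalStrings("The cow", "the moon"));  // two arguments
    }
}
```

The call with no arguments compiles and runs because the compiler supplies an empty array on the caller's behalf.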

You can see that this solution works pretty well. The OptionalStrings method may be legally called with no parameters, in which case the parameter array args is simply empty—in effect, the entire array is optional. Further, the calling function doesn’t need to explicitly wrap the parameters it sends in an array, and since the parameter array is an honest-to-goodness array, the called function can easily determine how many parameters it has received. But there are a few caveats:
  • There’s no enforceable limit to the number of arguments received in this way. You couldn’t, for instance, declare the OptionalStrings method to receive a maximum of three optional arguments without writing code to do so at runtime inside the function itself.
  • Similarly, there’s no way to make the array typesafe. If you need to support multiple types in the parameter array, you’re limited to using a lowest-common-denominator approach, usually declaring the parameter array as type Object.
  • Only one parameter may be marked using params, and it must be the last parameter in the method’s argument list.
  • You can't specify an optional out (passed by reference) parameter using this method.
An object-oriented solution
Another possible solution would be to create a class encapsulating all the possible arguments a method could receive, and pass an instance of that class to the method in question. This approach makes sense from an object-oriented point of view and solves the problems I pointed out with the parameter array solution, as you can see from Listing B.
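Listing B is likewise not reproduced here; the following is a sketch of the parameter-class pattern described above, with field names and defaults invented purely for illustration.

```csharp
using System;

// Illustrative parameter class; these fields and defaults are assumptions,
// not the article's actual Listing B.
class ParameterClass
{
    // Defaults assigned here play the role of default argument values.
    public string Name = "anonymous";
    public int Retries = 3;
    public bool Verbose = false;
}

class Program
{
    static void OptionalObjects(ParameterClass p)
    {
        if (p.Verbose)
            Console.WriteLine("Running {0} with {1} retries.", p.Name, p.Retries);
        // p is a reference, so this change is visible to the caller:
        // the whole object behaves like an out parameter.
        p.Retries = 0;
    }

    static void Main()
    {
        ParameterClass p = new ParameterClass();
        p.Name = "demo";              // override only the fields we care about
        OptionalObjects(p);
        Console.WriteLine(p.Retries); // prints 0: mutated inside the method
    }
}
```

The caller never has to mention `Retries` or `Verbose` at all; the defaults in the class declaration stand in for unsupplied arguments.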

By creating a default, parameterless constructor for the ParameterClass class, I can set whatever default values I want for the public fields, which represent the possible parameters for the OptionalObjects method. I just override any of the fields I’m interested in actually providing a value for, and pass the whole object to OptionalObjects. Because objects are passed by reference, any changes made to a ParameterClass field while inside the OptionalObjects method are reflected when the method returns. In effect, the whole object is an out parameter.

Not only does this provide a neat solution to the optional parameter conundrum, but passing arguments in the form of objects also serves to further insulate your classes from one another. It’s possible, using this method, to add additional arguments for OptionalObjects with a minimum of fuss. Simply redefining ParameterClass to contain the new fields is all that’s required.

Not perfect, but it works
None of these solutions is perfect, but they all enable you to fake your way into supporting optional parameters in your applications. Since the last word received from Microsoft seems to indicate that built-in optional parameter support will not be forthcoming, we’ll have to get by with workarounds like these.

Programming Language

A programming language is a machine-readable artificial language designed to express computations that can be performed by a machine, particularly a computer. Programming languages can be used to create programs that specify the behavior of a machine, to express algorithms precisely, or as a mode of human communication.

Many programming languages have some form of written specification of their syntax and semantics, since computers require precisely defined instructions. Some (such as C) are defined by a specification document (for example, an ISO Standard), while others (such as Perl) have a dominant implementation.

The earliest programming languages predate the invention of the computer, and were used to direct the behavior of machines such as automated looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field,[1] with many more being created every year.

Definitions

Traits often considered important for constituting a programming language:

  • Target: Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines. Some programming languages are used by one device to control another. For example, PostScript programs are frequently created by another program to control a computer printer or display.

Some authors restrict the term "programming language" to those languages that can express all possible algorithms;[6] sometimes the term "computer language" is used for more limited artificial languages.

Non-computational languages, such as markup languages like HTML or formal grammars like BNF, are usually not considered programming languages. A programming language (which may or may not be Turing complete) may be embedded in these non-computational (host) languages.

Usage

A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available (called primitives).[7]

Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness. When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers "do exactly what they are told to do", and cannot "understand" what code the programmer intended to write. The combination of the language definition, a program, and the program's inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program.

Programs for a computer might be executed in a batch process without human interaction, or a user might type commands in an interactive session of an interpreter. In this case the "commands" are simply programs, whose execution is chained together. When a language is used to give commands to a software application (such as a shell) it is called a scripting language.

Many languages have been designed from scratch, altered to meet new needs, combined with other languages, and eventually fallen into disuse. Although there have been attempts to design one "universal" computer language that serves all purposes, all of them have failed to be generally accepted as filling this role.[8] The need for diverse computer languages arises from the diversity of contexts in which languages are used:

  • Programs range from tiny scripts written by individual hobbyists to huge systems written by hundreds of programmers.
  • Programmers range in expertise from novices who need simplicity above all else, to experts who may be comfortable with considerable complexity.
  • Programs must balance speed, size, and simplicity on systems ranging from microcontrollers to supercomputers.
  • Programs may be written once and not change for generations, or they may undergo nearly constant modification.
  • Finally, programmers may simply differ in their tastes: they may be accustomed to discussing problems and expressing them in a particular language.

One common trend in the development of programming languages has been to add more ability to solve problems using a higher level of abstraction. The earliest programming languages were tied very closely to the underlying hardware of the computer. As new programming languages have developed, features have been added that let programmers express ideas that are more remote from simple translation into underlying hardware instructions. Because programmers are less tied to the complexity of the computer, their programs can do more computing with less programming effort, letting them deliver more functionality in the same amount of time.[9]

Natural language processors have been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate. Edsger Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs, and dismissed natural language programming as "foolish".[10] Alan Perlis was similarly dismissive of the idea.[11]

Elements

All programming languages have some primitive building blocks for the description of data and the processes or transformations applied to them (like the addition of two numbers or the selection of an item from a collection). These primitives are defined by syntactic and semantic rules which describe their structure and meaning respectively.

Syntax

[Figure: parse tree of Python code, with inset tokenization. Syntax highlighting is often used to aid programmers in recognizing elements of source code; the language shown is Python.]

A programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program.

The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, this article discusses textual syntax.

Programming language syntax is usually defined using a combination of regular expressions (for lexical structure) and Backus-Naur Form (for grammatical structure). Below is a simple grammar, based on Lisp:

expression ::= atom | list
atom ::= number | symbol
number ::= [+-]?['0'-'9']+
symbol ::= ['A'-'Z''a'-'z'].*
list ::= '(' expression* ')'

This grammar specifies the following:

  • an expression is either an atom or a list;
  • an atom is either a number or a symbol;
  • a number is an unbroken sequence of one or more decimal digits, optionally preceded by a plus or minus sign;
  • a symbol is a letter followed by zero or more of any characters (excluding whitespace); and
  • a list is a matched pair of parentheses, with zero or more expressions inside it.

The following are examples of well-formed token sequences in this grammar: '12345', '()', '(a b c232 (1))'
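To make the grammar concrete, here is a small recursive-descent recognizer for it, written as a sketch in Python (the article's illustration language), not as production parsing code. One assumption for clarity: the symbol rule is tightened to alphanumerics so that tokenization stays unambiguous, whereas the grammar's `.*` would let a symbol swallow an adjacent parenthesis.

```python
import re

# Token pattern mirroring the lexical rules above: a signed digit run
# (number), a letter followed by alphanumerics (symbol, tightened from
# the grammar's ".*" for unambiguous tokenization), or a parenthesis.
TOKEN = re.compile(r"[+-]?[0-9]+|[A-Za-z][A-Za-z0-9]*|[()]")

def tokenize(text):
    return TOKEN.findall(text)

def parse(tokens):
    """Parse one expression; return (value, remaining tokens)."""
    if not tokens:
        raise SyntaxError("unexpected end of input")
    tok, rest = tokens[0], tokens[1:]
    if tok == "(":                            # list ::= '(' expression* ')'
        items = []
        while rest and rest[0] != ")":
            item, rest = parse(rest)
            items.append(item)
        if not rest:
            raise SyntaxError("missing ')'")
        return items, rest[1:]
    if tok == ")":
        raise SyntaxError("unexpected ')'")
    if re.fullmatch(r"[+-]?[0-9]+", tok):     # number
        return int(tok), rest
    return tok, rest                          # symbol

print(parse(tokenize("(a b c232 (1))"))[0])  # ['a', 'b', 'c232', [1]]
```

All three well-formed examples above are accepted: '12345' parses as a number, '()' as an empty list, and '(a b c232 (1))' as a nested list.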

Not all syntactically correct programs are semantically correct: many are ill-formed per the language's rules, and may (depending on the language specification and the soundness of the implementation) result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it.

Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence or the sentence may be false:

  • "Colorless green ideas sleep furiously." is grammatically well-formed but has no generally accepted meaning.
  • "John is a married bachelor." is grammatically well-formed but expresses a meaning that cannot be true.

The following C language fragment is syntactically correct, but performs an operation that is not semantically defined (because p is a null pointer, the operations p->real and p->im have no meaning):

complex *p = NULL;
complex abs_p = sqrt (p->real * p->real + p->im * p->im);

The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars.[12]

Static semantics

The static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms.[13] The most important of these restrictions are covered by type systems.

Type system

A type system defines how a programming language classifies values and expressions into types, how it can manipulate those types and how they interact. This generally includes a description of the data structures that can be constructed in the language. The design and study of type systems using formal mathematics is known as type theory.

Typed versus untyped languages

A language is typed if the specification of every operation defines types of data to which the operation is applicable, with the implication that it is not applicable to other types.[14] For example, "this text between the quotes" is a string. In most programming languages, dividing a number by a string has no meaning. Most modern programming languages will therefore reject any program attempting to perform such an operation. In some languages, the meaningless operation will be detected when the program is compiled ("static" type checking), and rejected by the compiler, while in others, it will be detected when the program is run ("dynamic" type checking), resulting in a runtime exception.
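A quick Python illustration of the dynamic case described above: the meaningless division is syntactically fine and the program starts normally, but the operation is rejected the moment that line actually runs.

```python
# Dynamic ("run time") type checking: dividing a number by a string has
# no meaning in Python, so executing the expression raises a TypeError.
try:
    10 / "five"
except TypeError as err:
    print("rejected at run time:", err)
```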

A special case of typed languages are the single-type languages. These are often scripting or markup languages, such as Rexx or SGML, and have only one data type—most commonly character strings which are used for both symbolic and numeric data.

In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any data, which are generally considered to be sequences of bits of various lengths.[14] High-level languages which are untyped include BCPL and some varieties of Forth.

In practice, while few languages are considered typed from the point of view of type theory (verifying or rejecting all operations), most modern languages offer a degree of typing.[14] Many production languages provide means to bypass or subvert the type system.

Static versus dynamic typing

In static typing all expressions have their types determined prior to the program being run (typically at compile-time). For example, 1 and (2+2) are integer expressions; they cannot be passed to a function that expects a string, or stored in a variable that is defined to hold dates.[14]

Statically-typed languages can be either manifestly typed or type-inferred. In the first case, the programmer must explicitly write types at certain textual positions (for example, at variable declarations). In the second case, the compiler infers the types of expressions and declarations based on context. Most mainstream statically-typed languages, such as C++, C# and Java, are manifestly typed. Complete type inference has traditionally been associated with less mainstream languages, such as Haskell and ML. However, many manifestly typed languages support partial type inference; for example, Java and C# both infer types in certain limited cases.[15]

Dynamic typing, also called latent typing, determines the type-safety of operations at runtime; in other words, types are associated with runtime values rather than textual expressions.[14] As with type-inferred languages, dynamically typed languages do not require the programmer to write explicit type annotations on expressions. Among other things, this may permit a single variable to refer to values of different types at different points in the program execution. However, type errors cannot be automatically detected until a piece of code is actually executed, making debugging more difficult. Ruby, Lisp, JavaScript, and Python are dynamically typed.
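For example, in Python a name can refer to values of different types over its lifetime, and a latent type error goes unnoticed until the offending code is actually executed:

```python
x = 42            # x currently refers to an int...
x = "forty-two"   # ...and now to a str; the name itself has no declared type

def buggy():
    return "text" + 5   # a type error, but merely defining this raises nothing

# The error surfaces only when the function is called:
try:
    buggy()
except TypeError:
    print("type error detected at run time")
```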

Weak and strong typing

Weak typing allows a value of one type to be treated as another, for example treating a string as a number.[14] This can occasionally be useful, but it can also allow some kinds of program faults to go undetected at compile time and even at run time.

Strong typing prevents the above. An attempt to perform an operation on the wrong type of value raises an error.[14] Strongly-typed languages are often termed type-safe or safe.

An alternative definition for "weakly typed" refers to languages, such as Perl, JavaScript, and C++, which permit a large number of implicit type conversions. In JavaScript, for example, the expression 2 * x implicitly converts x to a number, and this conversion succeeds even if x is null, undefined, an Array, or a string of letters. Such implicit conversions are often useful, but they can mask programming errors.
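Python offers a useful contrast: it permits few implicit conversions, so a mixed-type expression either has an explicitly defined meaning or raises an error, and conversions must be spelled out.

```python
# "2" * 3 is defined, but as sequence repetition rather than arithmetic:
assert "2" * 3 == "222"

try:
    result = "2" + 3        # no implicit conversion: raises TypeError,
except TypeError:           # unlike JavaScript's 2 * x
    result = int("2") + 3   # the conversion must be made explicit
assert result == 5
```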

Strong and static are now generally considered orthogonal concepts, but usage in the literature differs. Some use the term strongly typed to mean strongly, statically typed, or, even more confusingly, to mean simply statically typed. Thus C has been called both strongly typed and weakly, statically typed.[16][17]

Execution semantics

Once data has been specified, the machine must be instructed to perform operations on the data. The execution semantics of a language defines how and when the various constructs of a language should produce a program behavior.

For example, the semantics may define the strategy by which expressions are evaluated to values, or the manner in which control structures conditionally execute statements.

Core library

Most programming languages have an associated core library (sometimes known as the 'Standard library', especially if it is included as part of the published language standard), which is conventionally made available by all implementations of the language. Core libraries typically include definitions for commonly used algorithms, data structures, and mechanisms for input and output.

A language's core library is often treated as part of the language by its users, although the designers may have treated it as a separate entity. Many language specifications define a core that must be made available in all implementations, and in the case of standardized languages this core library may be required. The line between a language and its core library therefore differs from language to language. Indeed, some languages are designed so that the meanings of certain syntactic constructs cannot even be described without referring to the core library. For example, in Java, a string literal is defined as an instance of the java.lang.String class; similarly, in Smalltalk, an anonymous function expression (a "block") constructs an instance of the library's BlockContext class. Conversely, Scheme contains multiple coherent subsets that suffice to construct the rest of the language as library macros, and so the language designers do not even bother to say which portions of the language must be implemented as language constructs, and which must be implemented as parts of a library.
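The same entanglement can be observed in Python, where literal syntax and even anonymous function expressions denote instances of types supplied by the core:

```python
# Core types and literal syntax are inseparable in Python:
assert isinstance("hello", str)    # a string literal is a str instance
assert type([1, 2, 3]) is list     # a list display constructs a core list

f = lambda n: n + 1                # a lambda expression yields an instance
assert type(f).__name__ == "function"   # of the built-in function type
```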

Practice

A language's designers and users must construct a number of artifacts that govern and enable the practice of programming. The most important of these artifacts are the language specification and implementation.

Specification

The specification of a programming language is intended to provide a definition that the language users and the implementors can use to determine whether the behavior of a program is correct, given its source code.

A programming language specification can take several forms, including the following:

  • An explicit definition of the syntax, static semantics, and execution semantics of the language. While syntax is commonly specified using a formal grammar, semantic definitions may be written in natural language (e.g., the C language), or a formal semantics (e.g., the Standard ML[18] and Scheme[19] specifications).
  • A description of the behavior of a translator for the language (e.g., the C++ and Fortran specifications). The syntax and semantics of the language have to be inferred from this description, which may be written in a natural or a formal language.
  • A reference or model implementation, sometimes written in the language being specified (e.g., Prolog or ANSI REXX[20]). The syntax and semantics of the language are explicit in the behavior of the reference implementation.

Implementation

An implementation of a programming language provides a way to execute programs written in that language on one or more configurations of hardware and software. There are, broadly, two approaches to programming language implementation: compilation and interpretation. It is generally possible to implement a language using either technique.

The output of a compiler may be executed by hardware or a program called an interpreter. In some implementations that make use of the interpreter approach there is no distinct boundary between compiling and interpreting. For instance, some implementations of the BASIC programming language compile and then execute the source a line at a time.

Programs that are executed directly on the hardware usually run several orders of magnitude faster than those that are interpreted in software.

One technique for improving the performance of interpreted programs is just-in-time compilation. Here the virtual machine, just before execution, translates the blocks of bytecode that are about to be used into machine code, for direct execution on the hardware.
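CPython offers a convenient, if simplified, view of the compile-then-interpret pipeline: the `compile` built-in produces a bytecode object and the virtual machine then interprets it. (CPython interprets rather than JIT-compiles, so this illustrates the pipeline, not just-in-time compilation itself.)

```python
import dis

# Compile source text to a bytecode (code) object...
code = compile("a + b * 2", "<example>", "eval")

dis.dis(code)   # ...inspect the instructions the virtual machine will run...

# ...and let the interpreter execute them against some bindings:
print(eval(code, {"a": 1, "b": 3}))
```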

History

[Figure: a selection of textbooks that teach programming, in languages both popular and obscure; only a few of the thousands of programming languages and dialects that have been designed.]

Early developments

The first programming languages predate the modern computer. The 19th century had "programmable" looms and player piano scrolls which implemented what are today recognized as examples of domain-specific programming languages. By the beginning of the twentieth century, punch cards encoded data and directed mechanical processing. In the 1930s and 1940s, the formalisms of Alonzo Church's lambda calculus and Alan Turing's Turing machines provided mathematical abstractions for expressing algorithms; the lambda calculus remains influential in language design.[21]

In the 1940s, the first electrically powered digital computers were created. The first high-level programming language to be designed for a computer was Plankalkül, developed for the German Z3 by Konrad Zuse between 1943 and 1945.

Programmers of early 1950s computers, notably UNIVAC I and IBM 701, used machine language programs, that is, the first generation language (1GL). 1GL programming was quickly superseded by similarly machine-specific, but mnemonic, second generation languages (2GL) known as assembly languages or "assembler". Later in the 1950s, assembly language programming, which had evolved to include the use of macro instructions, was followed by the development of "third generation" programming languages (3GL), such as FORTRAN, LISP, and COBOL. 3GLs are more abstract and are "portable", or at least similarly implemented on computers that do not support the same native machine code. Updated versions of all of these 3GLs are still in general use, and each has strongly influenced the development of later languages.[22] At the end of the 1950s, the language formalized as Algol 60 was introduced, and most later programming languages are, in many respects, descendants of Algol.[22] The format and use of the early programming languages were heavily influenced by the constraints of the interface.[23]

Refinement

The period from the 1960s to the late 1970s brought the development of the major language paradigms now in use, though many aspects were refinements of ideas seen in the very first third-generation programming languages. The landmark languages of this period each spawned an entire family of descendants, and most modern languages count at least one of them in their ancestry.

The 1960s and 1970s also saw considerable debate over the merits of structured programming, and whether programming languages should be designed to support it.[26] Edsger Dijkstra, in a famous 1968 letter published in the Communications of the ACM, argued that GOTO statements should be eliminated from all "higher level" programming languages.[27]

The 1960s and 1970s also saw the expansion of techniques that reduced a program's footprint and improved the productivity of the programmer and user. The card deck for an early 4GL was much smaller than a deck expressing the same functionality in a 3GL.

Consolidation and growth

The 1980s were years of relative consolidation. C++ combined object-oriented and systems programming. The United States government standardized Ada, a systems programming language intended for use by defense contractors. In Japan and elsewhere, vast sums were spent investigating so-called "fifth generation" languages that incorporated logic programming constructs.[28] The functional languages community moved to standardize ML and Lisp. Rather than inventing new paradigms, all of these movements elaborated upon the ideas invented in the previous decade.

One important trend in language design during the 1980s was an increased focus on programming for large-scale systems through the use of modules, or large-scale organizational units of code. Modula-2, Ada, and ML all developed notable module systems in the 1980s, although other languages, such as PL/I, already had extensive support for modular programming. Module systems were often wedded to generic programming constructs.[29]

The rapid growth of the Internet in the mid-1990s created opportunities for new languages. Perl, originally a Unix scripting tool first released in 1987, became common in dynamic Web sites. Java came to be used for server-side programming. These developments were not fundamentally novel, rather they were refinements to existing languages and paradigms, and largely based on the C family of programming languages.

Programming language evolution continues, in both industry and research. Current directions include security and reliability verification, new kinds of modularity (mixins, delegates, aspects), and database integration.

The 4GLs are examples of domain-specific languages, such as SQL, which manipulates and returns sets of data rather than the scalar values canonical to most programming languages. Perl, for example, can embed multiple 4GL programs, as well as multiple JavaScript programs, in 'here documents' within its own code, using variable interpolation in the here document to support multi-language programming.[30]

Measuring language usage

It is difficult to determine which programming languages are most widely used, and what "usage" means varies by context. One language may occupy the greater number of programmer hours, a different one may have more lines of code, and a third may consume the most CPU time. Some languages are very popular for particular kinds of applications. For example, COBOL is still strong in the corporate data center, often on large mainframes; FORTRAN in engineering applications; C in embedded applications and operating systems; and other languages are regularly used to write many different kinds of applications.

Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed:

  • counting the number of job advertisements that mention the language[31]
  • the number of books sold that teach or describe the language[32]
  • estimates of the number of existing lines of code written in the language—which may underestimate languages not often found in public searches[33]
  • counts of language references (i.e., to the name of the language) found using a web search engine.

Combining and averaging information from various internet sites, langpop.com claims[34] that in 2008 the ten most cited programming languages were (in alphabetical order): C, C++, C#, Java, JavaScript, Perl, PHP, Python, Ruby, and SQL.

Taxonomies

There is no overarching classification scheme for programming languages. A given programming language does not usually have a single ancestor language. Languages commonly arise by combining the elements of several predecessor languages with new ideas in circulation at the time. Ideas that originate in one language will diffuse throughout a family of related languages, and then leap suddenly across familial gaps to appear in an entirely different family.

The task is further complicated by the fact that languages can be classified along multiple axes. For example, Java is both an object-oriented language (because it encourages object-oriented organization) and a concurrent language (because it contains built-in constructs for running multiple threads in parallel). Python is an object-oriented scripting language.

In broad strokes, programming languages divide into programming paradigms and a classification by intended domain of use. Paradigms include procedural programming, object-oriented programming, functional programming, and logic programming; some languages are hybrids of paradigms or multi-paradigmatic. An assembly language is not so much a paradigm as a direct model of an underlying machine architecture. By purpose, programming languages might be considered general purpose, system programming languages, scripting languages, domain-specific languages, or concurrent/distributed languages (or a combination of these).[35] Some general purpose languages were designed largely with educational goals.[36]

A programming language may also be classified by factors unrelated to programming paradigm. For instance, most programming languages use English-language keywords, while a minority do not. Languages may also be classified as esoteric or not; esoteric languages are designed as experiments or jokes rather than for practical use.
