A Free JavaScript Speed Boost for Everyone!

A Free JavaScript Speed Boost for Everyone!: „An exciting development in the world of DOM scripting is the W3C Selector API. Up until now, using the DOM Level 2 API, the only way to obtain references to HTML elements in the DOM was to use either document.getElementById or document.getElementsByTagName and filter the results manually. With the rise of CSS, JavaScript programmers asked the obvious question, ‘If the browser has a really fast way of selecting HTML elements that match CSS selectors, why can’t we?’

The Selector API defines the querySelector and querySelectorAll methods, which take a CSS selector string and return the first matching element or a StaticNodeList of matching elements, respectively. The methods can be called on the document object to select elements from the whole document, or on a specific HTML element to select only from that element’s descendants.
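
As a rough sketch, using the shopping-list markup introduced below:

// Returns the first element in the document that matches the selector (or null).
var menu = document.querySelector('#menu');
// Returns a StaticNodeList of all matching descendants of that element.
var importantItems = menu.querySelectorAll('li.important');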

To illustrate how much easier your life will be using the Selector API, have a look at this example HTML:

<ul id='menu'>
  <li>
    <input type='checkbox' name='item1_done' id='item1_done'> 
    <label for='item1_done'>bread</label>
  </li>
  <li class='important'>
    <input type='checkbox' name='item2_done' id='item2_done'> 
    <label for='item2_done'>milk</label>
  </li>
  <!-- imagine more items -->
</ul>

Our task is to check all the checkboxes for the list items that have the class ‘important’. Using only DOM Level 2 methods we could do it this way:

var items = document.getElementById('menu').getElementsByTagName('li');
for (var i = 0; i < items.length; i++) {
  if (items[i].className.match(/important/)) {
    // firstChild is often a whitespace text node here, so locate the
    // input element explicitly before checking its type.
    var input = items[i].getElementsByTagName('input')[0];
    if (input && input.type == 'checkbox') {
      input.checked = true;
    }
  }
}

Using the new Selector API, we can simplify it to this:

var items = document.querySelectorAll('#menu li.important input[type="checkbox"]');
for(var i=0; i < items.length; i++) {
  items[i].checked = true;
}

That’s much nicer! The methods also support selector grouping — multiple selectors separated by commas. The Selector API is working right now in Safari 3.1, the Internet Explorer 8 beta, and Firefox 3.1 alpha1. Opera is also working on adding support for the API.
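
For instance, a quick sketch of grouping (the ‘urgent’ class is hypothetical and not part of the markup above):

// One call covers every selector in the comma-separated group.
var boxes = document.querySelectorAll(
  '#menu li.important input[type="checkbox"], #menu li.urgent input[type="checkbox"]'
);
for (var i = 0; i < boxes.length; i++) {
  boxes[i].checked = true;
}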

If you’re a fan of one of the many JavaScript libraries available you’re probably thinking ‘But, I can already do that.’ One of the great examples of the benefits of using a JavaScript library is the CSS selector implementation found in nearly all of them. Recently we’ve seen huge speed improvements in these implementations because library authors have been sharing their techniques. So what’s the benefit of using the Selector API? In a word: speed — native implementations are fast! And better yet, all of the JavaScript libraries are poised to benefit. jQuery and Prototype are already developing implementations that make use of the Selector API, while the Dojo Toolkit, DOMAssistant and base2 have already made use of it.

There’s a reason why those three libraries were the first ones to benefit. Kevin talked about the potential problem back in Tech Times #190, in the article titled ‘Is Your JavaScript Library Standards Compliant?’ The Selector API makes use of standard CSS selectors, so if the browser doesn’t support a certain selector then you won’t be able to use it. The libraries that have already taken advantage of the Selector API are those that only ever supported standard CSS selectors. For those libraries, supporting the API was (almost) as easy as doing this:

if (document.querySelector) {
  // The browser supports the Selector API, so use the fast native method.
  return document.querySelector(selector);
} else {
  // Otherwise fall back to the library's existing selector function.
  return oldSelectorFunction(selector);
}

Libraries that support custom selectors will have more work to do. The risk is that if you have used custom CSS selectors extensively in your project, it may be difficult for your chosen library to pass on the speed benefit to you because the library will have to use its default selector instead of the Selector API. If the library somehow rewires its custom selectors so that they can utilize the Selector API, the secondary risk is increased code bloat.
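
One way a library might rewire things is sketched below; this is a generic illustration rather than any particular library’s code, and customSelectorEngine stands in for a hypothetical fallback engine:

function select(selector, context) {
  context = context || document;
  if (context.querySelectorAll) {
    try {
      // Fast path: let the browser handle any standard CSS selector.
      return context.querySelectorAll(selector);
    } catch (e) {
      // The native engine throws on selectors it doesn't recognize
      // (for example a custom ':contains(...)' extension), so fall through.
    }
  }
  // Slow path: the library's own JavaScript selector engine (hypothetical).
  return customSelectorEngine(selector, context);
}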

Hopefully the Selector API will encourage the use of standard CSS selectors over custom ones. In fact, if uptake of the new browser versions is good and the performance benefits of the new Selector API are compelling enough, we could see custom selector functionality moved into supplementary libraries that you only need when legacy compatibility is required.

Dean Edwards’ base2 library has the nicest implementation in my opinion. Base2 implements the API exactly, which means you can write your JavaScript using the standard API methods — Base2 only creates custom querySelector and querySelectorAll methods if the browser doesn’t support them. You can’t get any cleaner than that. Base2 does, however, implement the non-standard ‘!=’ attribute selector in its custom selector function, apparently because of peer pressure, so it’ll have to have points deducted for that.
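
The pattern looks roughly like this (a simplified sketch of the idea, not Base2’s actual source; customSelectorEngine again stands in for the library’s own engine):

if (!document.querySelectorAll) {
  // Only define the standard methods when the browser lacks them.
  document.querySelectorAll = function (selector) {
    return customSelectorEngine(selector, document);
  };
  document.querySelector = function (selector) {
    return document.querySelectorAll(selector)[0] || null;
  };
}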

Regardless of whether you use a JavaScript library or roll your own, the new browser implementations of the Selector API give everyone an instant speed boost. We all win, hooray!“

(Via SitePoint Blogs.)

TYPO3 Podcast: Creating a Graphical Text Menu with TypoScript

TYPO3 Podcast: Creating a Graphical Text Menu with TypoScript: „In the current podcast from TYPO3 founder Kasper Skårhøj, the topic is building a graphical text menu based on TypoScript. Dirk Paulus and Jochen Stange of the Bad Dürkheim web agency ‘die medienagenten’ present it together with Arnd Messer, managing director of the Wilhelmsfeld web agency…

In the roughly seven-minute podcast episode, Dirk Paulus and Jochen Stange demonstrate a way to generate a graphical text menu with TypoScript whose appearance and behavior are controlled via CSS. The menu works without JavaScript and is therefore accessible.

The menu consists of two parts: a graphical menu and an enhanced layer menu. The demo site www.gtmenu.de shows it in action. The code required for the menu is available for download on snipplr.

In the podcast, Jochen Stange explicitly points out that such a menu only works correctly from TYPO3 version 4.1 onwards. Only as of that version is the TypoScript value ‘{TSFE:lastImgResourceInfo|0}’ available, which allows the dimensions of a menu item (width and height) to be written directly into the menu item as a style property.

The podcast was recorded during the most recent TYPO3 snowboard tour, which took place at the end of March in Laax, Switzerland.

(Via t3n.yeebase.com – Open Source, Web & TYPO3.)

T3Cast on DEV3: Eclipse-Based FLOW3 and TYPO3 Development

T3Cast on DEV3: Eclipse-Based FLOW3 and TYPO3 Development: „David Brühlmeier joins Robert Lemke via Skype from a park in Zurich. In just under 20 minutes, Brühlmeier shows Robert Lemke, by way of example, how to use DEV3, the development environment he created as part of his master’s thesis.

All the steps needed for day-to-day work are shown: after the actual installation of the plugin, the remaining setup is demonstrated. In addition, Robert Lemke and David Brühlmeier give an insight into working with DEV3.

The DEV3 project aims to provide developers with a development environment for TYPO3 based on Eclipse and the PHP Development Tools (PDT).

DEV3 grew out of two previously independent projects, tyClipse by Sebastian Böttger and Eckhard M. Jäger and FLOW3DE by David Brühlmeier. On the advice of TYPO3 5.0 and FLOW3 development lead Robert Lemke, the two projects decided to join forces.“

(Via t3n.yeebase.com – Open Source, Web & TYPO3.)

Is It Time to Ditch IE6?

Is It Time to Ditch IE6?: „On August 27, 2001, almost exactly 7 years ago, Microsoft unleashed Internet Explorer 6 upon the world. Despite version 7 having been out now for almost two years, and version 8 already in public beta, usage of the 2001 release remains strong. W3Counter reports that it is still the most popular browser in the world at 34.6% of all visits, while TheCounter.com has it second to IE7, but only barely and still commanding a whopping 36% market share.

Because so many people still use the older version of Internet Explorer, many web sites have made the choice to continue supporting it (including SitePoint — where about 12% of our visitors still come to us using IE6). But is it perhaps time to ditch IE6 support and start forcing people to upgrade?

Web application developer 37signals made the decision to drop IE6 support in July (actual support for Microsoft’s last generation browser ceased on August 15). ‘IE 6 can’t provide the same web experience that modern browsers can,’ wrote 37signals of the decision. ‘Continued support of IE 6 means that we can’t optimize our interfaces or provide an enhanced customer experience in our apps. Supporting IE 6 means slower progress, less progress, and, in some places, no progress.’

According to 37signals, supporting IE6 was holding them back. And 37signals isn’t alone in their dislike of IE6. In 2006, a few months before Microsoft released their last major browser, PC World magazine ranked Internet Explorer 6 as the 8th worst tech product of all time, citing its terrible track record when it comes to security.

Security is such a big issue for IE6 that one blogger recently reported that 95% of all bots accessing his site use Internet Explorer 6 as their user-agent. ‘Most blog spam comes from bots that either fake or, as a trojan, use Internet Explorer 6 of infected systems,’ he wrote, ultimately deciding to block IE6 completely to alleviate the blog spam problem.

Of course, security isn’t the only reason web developers are sour on IE6. Internet Explorer 6 is also dismal when it comes to standards compliance. So why do people continue to use it? As Nick La wrote a year ago, the reason people still use IE6 is that developers go out of their way to make web sites work in it. So most people don’t realize that IE6 isn’t a good browser.

‘We all know that IE6 is outdated and has horrible CSS rendering engine. However, most average Internet users haven’t realized that yet. Why? Because we put our hard work on it and patch the bugs by various IE hacks,’ La wrote, urging people to drop support for IE6.

A third of the Internet is a lot of people to just leave behind, though. So support for IE6 continues at most web sites, especially large ones. What we need to move us forward, however, is a bold move, not unlike the one Apple made in 2001 when it decided to forgo backwards compatibility when it released OS X. In order to save the Internet from IE6, perhaps we need to stop supporting it.

What do you think? Should web developers stop supporting Internet Explorer 6? Vote in our poll and then leave your thoughts in the comments below.


(Via SitePoint Blogs.)

A Closer Look at YUI 3.0 PR 1: Dav Glass’s Draggable Portal Example

A Closer Look at YUI 3.0 PR 1: Dav Glass’s Draggable Portal Example: „

YUI 3.0 Preview Release 1 was made available on Wednesday, and with it we provided a look at how the next major iteration of YUI is taking shape. Among the elements we shipped with the preview is a new example from Dav Glass, the Draggable Portal, which exercises a broad cross section of the preview’s contents.

The Portal Example in the YUI 3.0 preview release.

The Draggable Portal is a common design pattern in which content modules on the page can be repositioned, minimized, removed, and re-added to the page. The state of the modules persists in the background, so a reload of the page or a return to the page calls up the modules in their most recently positioned state. You see variations of this design pattern on many personalizable portals like My Yahoo!, Netvibes, and iGoogle.

In this article, we’ll take a look under the hood of this example to get a richer sense of YUI’s 3.x codeline and the idioms and patterns it establishes. We’re just pulling out some specific code snippets to examine here, but you can review the full source code for this example, and for 66 others, on the YUI 3 website.


(Via Yahoo! User Interface Blog.)

Why People Pirate Software

Why People Pirate Software: „

Cliff Harris, the man behind one-man UK computer game development shop Positech, wondered recently why people were pirating his games. So a few days ago, Harris posted on his blog asking people to tell him why they downloaded his games without paying. Harris said his only motive was to learn about why people do it, and promised to ‘read every single [email], and keep an open mind.’ He promised not to rat anyone out for pirating.

Harris’ blog post got 206 comments and hundreds of emails — long ones, he said. ‘Few people wrote under 100 words. Some people put tolstoy to shame. It seems a lot of people have waited a long time to tell a game developer the answer to this question,’ he said.

Today, Techdirt noticed that Harris had posted his promised summary and response.

So why do people pirate software (specifically games)?

  • Money – ‘A LOT of people cited the cost of games as a major reason for pirating. Many were kids with no cash and lots of time to play games, but many were not,’ wrote Harris. Positech’s games are priced between $19 and $23, and Harris said that he was surprised that so many people thought that was too high.
  • Quality – ‘Although there were many and varied complaints about tech support, game stability, bugs and system requirements, it was interesting to hear so many complaints about actual game design and gameplay,’ Harris said. Many people agreed that though today’s games look fantastic, they ‘got boring too quickly, were too derivative, and had gameplay issues.’ Another quality complaint: Demos are too short and people feel that they’re often not representative of the final product.
  • DRM – ‘People don’t like DRM, we knew that, but the extent to which DRM is turning away people who have no other complaints is possibly misunderstood. If you wanted to change ONE thing to get more pirates to buy games, scrapping DRM is it.’
  • Ease – Writes Harris: ‘Lots of people claimed to pirate because it was easier than going to shops. Many of them said they pirate everything that’s not on [Valve’s] Steam. Steam got a pretty universal thumbs up from everyone.’ (Harris said that he would love to get his games on Steam, but it’s not open to everyone.)
  • Because I Can – 5% of the replies, said Harris, came from people who admitted that stealing games online was easy to do because it was easy to get away with.

To his credit, Harris did just what he said he would and considered the responses he fielded. He plans to make a number of changes, including ditching DRM completely, creating longer game demos, considering a drop in price (though he seems most hesitant about that change), and working harder to create higher quality games. ‘I’ve gone from being demoralized by pirates to actually inspired by them, and I’m working harder than ever before on making my games fun and polished,’ he wrote.

One of the lessons to be learned from Harris, beyond the interesting look into the reasons why people pirate software, is the value of having a good corporate blog. We wrote last week that, properly done, a corporate blog can have tremendous value. Harris’ Positech blog proves that. By opening the channels of communication with his customers and users, Harris was able to get honest feedback that he is putting to good use to make himself more money and make his customers happier.

(Via SitePoint Blogs.)

W3C Releases Mobile Web Best Practices

W3C Releases Mobile Web Best Practices: „

The World Wide Web Consortium (W3C) today released the 1.0 version of their Mobile Web Best Practices document. The guidelines offer mobile web developers a consistent set of best practices to apply when creating content for consumption on mobile devices. ‘The principal objective is to improve the user experience of the Web when accessed from [mobile web] devices,’ according to the W3C.

In Japan, there are already more mobile web users than PC users, and the rest of the world is catching up. Jupiter Research expects that mobile Web 2.0 revenues will hit $22.4 billion by 2014, with the biggest growth areas in mobile social networking and user generated content.

Developing content across such a wide array of mobile devices and creating a consistent and enjoyable user experience is not an easy task. The W3C hopes that its new mobile best practices guidelines will make it easier for developers to create content and applications for cell phones and other mobile devices.

‘Mobile Web content developers now have stable guidelines and maturing tools to help them create a better mobile Web experience,’ said Dominique Hazaël-Massieux, W3C Mobile Web Activity Lead in a press release. ‘In support of the W3C mission of building One Web, we want to support the developer community by providing tools to enable a great mobile Web user experience.’

The W3C also announced the release of the XHTML Basic 1.1 Recommendation today as the preferred markup language for the best practices document. ‘Until today, content developers faced an additional challenge: a variety of mobile markup languages to choose from,’ said the W3C. ‘With the publication of the XHTML Basic 1.1 Recommendation today, the preferred format specification of the Best Practices, there is now a full convergence in mobile markup languages, including those developed by the Open Mobile Alliance (OMA).’“

(Via SitePoint Blogs.)

Working With History in Bash

Working With History in Bash: „

Yesterday we talked about favorite bash features (on the ##textmate IRC channel). I figured it was worth posting mine to this blog; they mostly revolve around history, hence the title.

Setup

My shell history collects a lot of complex command invocations which take time to figure out. To ensure that I have access to them at a later time, I have the following 3 lines in my bash init:

export HISTCONTROL=erasedups
export HISTSIZE=10000
shopt -s histappend

The first one will remove duplicates from the history (when a new item is added). For example, if you switch between running make and ./a.out in a shell, you may later find that the last 100 or so history items are a mix of these two commands. Not very useful.

The second one increases the history size. With duplicates erased, the history already holds a lot more actual information, but I still like to increase the default size of only 1,000 items.

The third line ensures that when you exit a shell, the history from that session is appended to ~/.bash_history. Without this, you might very well lose the history of entire sessions (rather weird that this is not enabled by default).

History Searching

Now that I have my history preserved nicely in ~/.bash_history there are a few ways to search it.

Using Grep

The crudest approach is grep. You can do:

history|grep iptables

For me (on this particular Linux server) I get:

4599  iptables -N http-block
4600  iptables -A http-block -s 58.60.43.196 -j DROP
4601  iptables -A INPUT -p tcp --dport 80 -j http-block
4602  iptables -L http-block
4603  iptables-save -c
4604  history|grep iptables

I do this often enough to have an alias for history (which is just h).

From the output I can either copy/paste the stuff I want, or repeat a given history event. You’ll notice that each history event has a number; you can repeat, for example, event number 4603 simply by running:

!4603

I will write a bit more about referencing history events in History Expansion.

Prefix Searching

Similar to how you can press arrow up for the previous history event, there is a function you can invoke for the previous history event with the same prefix as what is to the left of the insertion point.

This function is called history-search-backward and by default does not have a key equivalent. So to actually reach this function, I have the following in ~/.inputrc (or /etc/inputrc when I control the full system):

"\ep": history-search-backward

This places the function on ⎋P (escape, then P). So if I want to repeat the iptables-save -c history event we found in the previous section, all I do is type ipt and hit ⎋P. If it finds a later event with the same prefix, I hit ⎋P again to go further back.

This functionality is offered by the readline library, so if you set up this key, you have access to prefix searching in all commands that use this library.

Incremental Search

It is possible to press ⌃R to do an incremental (interactive) search of the history.

Personally I am not a big fan of this feature, so I will leave it at that :)

Update: The reason I dislike ⌃R is both because the interactive stuff just seems to get in the way (when ⎋P is what I need 99% of the time) and because it fails in cases where I ‘switch shell’. For example, I may do ssh mm, press return, then instantly type f⎋P and hit return again (to execute free -m on the server called mm). I enter this before the connection to the server has been fully established, and here ⌃R would have been taken by the local shell, but it is the shell history on the server I want to search.

History Expansion

History Expansion was what we did above when we ran !4603. It is a DSL for referencing history events and optionally running transformations on them.

Anyone interested in this should run man bash and search for History Expansion, but just to give you a feel for what it is, I will reference a subset of the manual and provide a few examples.

Event Designators

First, an event designator starts with ! and then the event we want to reference. This can be:

«n»      Reference event number «n».
-«n»     Go «n» events back.
!        Last line (this is the default).
#        Current line.
«text»   Last event starting with «text».
?«text»  Last event containing «text».

So if we want to re-run our iptables-save -c we can do: !ipt.

What’s more useful though is to use history references as part of larger commands.

For example:

% which ruby
/usr/bin/ruby
% ls -l $(!!)
lrwxr-xr-x  1 root  wheel  76 30 Oct  2007 /usr/bin/ruby -> ../../System/Library/Frameworks/Ruby.framework/Versions/Current/usr/bin/ruby

Or something like:

% make some_target
(no errors)
% path/to/target some arguments
(no errors)
% !-2 && !-1

Word Designators

In the previous section we referenced entire history events. It is possible to reference just a subset of a history event by appending a : to the event designator, followed by the word of interest. The two most useful are:

«n»      Reference the «n»’th word.
$        Reference the last word.

So for example we can do:

% mkdir -p /path/to/our/www-files
(no errors)
% chown www:www !$
(no errors)

Here we reference the last word of the last line. We can also reference words on the same line, e.g.:

% cp /path/to/important/file !#:1_backup

To reference the last word of the last line one can also press ⎋_ which will immediately insert that word.

Modifiers

To make history substitution even more useful (and harder to remember), one can also add a modifier to the event designator.

The most useful modifiers are, in my experience, :h and :t; these are head and tail respectively, better known as dirname and basename.

An example could be:

% ls -l /path/to/some/file
(listing of file)
% cd !$:h
(change directory to that of file)

Brace Expansion

Somewhat related to the backup example, where we referenced the first argument as !#:1 and appended _backup to it, another approach is brace expansion.

Anywhere on a command line, one can write {a,b,c} which will expand to the 3 words a, b, and c. If we include a prefix/suffix, that will be part of each of the expanded words. We can also leave the word in the braces empty, and have it expand to just the prefix/suffix, so for example we can do:

% cp /path/to/important/file{,_backup}

This is functionally equivalent to:

% cp /path/to/important/file !#:1_backup

But the lack of a hardcoded word number is IMO an improvement.

(Via TextMate Blog.)

Living and Working with Social Software and Web 2.0

In view of the far-reaching changes that social software and Web 2.0 bring with them, the MFG Innovationsagentur für IT und Medien of the state of Baden-Württemberg has published „a digital lifestyle – leben und arbeiten mit social software“, which examines key aspects of this shift.

Over more than 80 pages, renowned authors from research and practice explore how our lifestyle and our working world will develop in terms of cooperation, interaction, and participation in the digital future.

The informative and attractively designed publication is offered as a free download and is also available as a print version for a nominal fee of 15 euros. It works as an introduction to the topic for interested individuals and gives companies an overview of the possibilities Web 2.0 offers.

http://www.digital-lifestyle.mfg-innovation.de/?page_id=34

Web Form Factory: Open Source Form Generator

Forms give you direct contact with a site’s visitors, collecting data for things like newsletters, comments, and addresses. The design is quickly done, but without PHP knowledge it won’t turn into a working form. Anyone who doesn’t have a PHP developer at hand can turn to the Web Form Factory.

Building forms for a website is usually left to suitably trained developers. The visual design is not a big hurdle for most publishers, but programming the forms is another matter: specialist knowledge is needed to store the form input in a database or send it by e-mail.

With the Web Form Factory (WFF) there is now an open source solution for creating web forms. The easy-to-use web application inspects a hand-built HTML file for the input types used, such as text fields, drop-down lists, checkboxes, and radio buttons, and automatically generates a PHP-backed HTML form from it. The result can be downloaded as a zip file, unpacked, and integrated into your own website.

WFF is currently in beta. At the moment only the binding of HTML files to PHP (version 4, 5, or 5.1) is supported, though that is set to change in the near future. It is also possible to create an e-mail form, where the collected data is not stored in a database but sent to an e-mail address.

An introductory video in the tutorial section provides help with working with WFF, including creating the HTML file.

(Via t3n.)