Modifying Adobe Photoshop Action (*.atn) Files

Intro

Recently at work, I was asked if it was possible to modify an ATN file, replacing all occurrences of a string with a different string (e.g., replacing all occurrences of "hello" with "goodbye"). I wasn't able to understand the file layout well enough to read every string, but I did figure out two UTF-16 string formats. (There is also an ASCII format similar to the UTF-16BE one, but I didn't need to touch those strings, so the program doesn't search for them.)

To use the program, specify an input file, an output file, and pairs of strings to search and replace.

java -jar modifyatn.jar [--lenient-le]
                        inputFile outputFile
                        searchString replacementString
                        [searchString2 replacementString2 [...]]
If any parameter contains spaces (e.g., a filename or a search string with spaces), put quotes around the entire parameter.

By default, detection of little-endian strings is strict: the byte counts must match the expected values, and the "txtu" signature must be present. If these turn out to vary, the "--lenient-le" option forces a lenient interpretation.

Example: java -jar modifyatn.jar input.atn output.atn "hello world" "goodbye world" world everyone

This will replace "hello world" with "goodbye world", and then make a second pass replacing "world" with "everyone".

Rick Ralston posted a tutorial for OS X (which should be similar under Windows) on his site "The Automatist" at http://www.theautomatist.com/the_automatist/2008/08/tutorial-editing-photoshop-actions-atn-files-with-java.html.

Download

Jar file: modifyatn.jar
Source (Java): ModifyATN3.java

Technical Details

The strings are a cross between Pascal- and C-style strings: a DWORD length followed by the UTF-16 string, with a terminating UTF-16 null included in both the string and the length. Curiously, both big-endian and little-endian encodings are used. The little-endian strings are preceded by a couple of byte counts that, in the files I looked at, were always 12 bytes more than the actual byte count of the string; one of the counts is big-endian, and the other is little-endian. In the files I was given, the little-endian strings were always used for file paths.

UTF-16BE (big-endian) strings
DWORD Big-endian length of string in characters, including terminal null.
WORD[] Big-endian characters, including terminal null.
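
For illustration, here's a minimal Java sketch of reading one big-endian string from the raw file bytes. The method and parameter names are mine, not taken from ModifyATN3.java.

    // Sketch: decode a UTF-16BE string record at a given offset in the file bytes.
    static String readUtf16BEString(byte[] buf, int offset) {
        // Big-endian DWORD: character count, including the terminating null.
        int charCount = ((buf[offset] & 0xFF) << 24)
                      | ((buf[offset + 1] & 0xFF) << 16)
                      | ((buf[offset + 2] & 0xFF) << 8)
                      |  (buf[offset + 3] & 0xFF);
        int byteLen = charCount * 2;  // two bytes per UTF-16 code unit
        String s = new String(buf, offset + 4, byteLen,
                java.nio.charset.StandardCharsets.UTF_16BE);
        return s.substring(0, s.length() - 1);  // drop the terminating null
    }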

UTF-16LE (little-endian) strings
DWORD Big-endian, size of string in bytes + 12.
BYTE[4] String "txtu".
DWORD Little-endian, size of string in bytes + 12.
DWORD Little-endian, length of string in characters including terminal null.
WORD[] Little-endian characters, including terminal null.
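
A similar sketch for the little-endian records, including the strict checks that the "--lenient-le" option relaxes. The helper names are illustrative, and the "+ 12" interpretation is based on the files I examined, as noted above.

    // Sketch: strict detection of a little-endian string record, per the layout above.
    static String readUtf16LEString(byte[] buf, int offset) {
        int beSize = readBE32(buf, offset);          // big-endian: string bytes + 12
        String sig = new String(buf, offset + 4, 4,
                java.nio.charset.StandardCharsets.US_ASCII);
        int leSize = readLE32(buf, offset + 8);      // little-endian: string bytes + 12
        int charCount = readLE32(buf, offset + 12);  // little-endian: chars incl. null
        int byteLen = charCount * 2;

        // Strict checks; a lenient mode would skip or relax these.
        if (!"txtu".equals(sig) || beSize != byteLen + 12 || leSize != byteLen + 12) {
            throw new IllegalArgumentException("Not a strict little-endian string record");
        }
        String s = new String(buf, offset + 16, byteLen,
                java.nio.charset.StandardCharsets.UTF_16LE);
        return s.substring(0, s.length() - 1);       // drop the terminating null
    }

    static int readBE32(byte[] b, int o) {
        return ((b[o] & 0xFF) << 24) | ((b[o + 1] & 0xFF) << 16)
             | ((b[o + 2] & 0xFF) << 8) | (b[o + 3] & 0xFF);
    }

    static int readLE32(byte[] b, int o) {
        return (b[o] & 0xFF) | ((b[o + 1] & 0xFF) << 8)
             | ((b[o + 2] & 0xFF) << 16) | ((b[o + 3] & 0xFF) << 24);
    }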

While writing this program, I tried a couple of approaches, but settled on just reading the whole file as a byte[] for simplicity, and refined the program from there. The first version was actually a FilterInputStream that maintained read-ahead and read-behind buffers, with output lagging behind input (output was written as data expired from the read-behind buffer).
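
As a rough illustration of the byte[] approach, the core of the search is just a byte-pattern scan over the file image. The sketch below only locates big-endian matches; the actual program also has to rewrite the surrounding length fields when the replacement differs in length, and repeat the scan for the little-endian records.

    import java.nio.charset.StandardCharsets;

    // Sketch: find occurrences of the UTF-16BE bytes of a search string in the file image.
    static int indexOf(byte[] haystack, byte[] needle, int from) {
        outer:
        for (int i = from; i <= haystack.length - needle.length; i++) {
            for (int j = 0; j < needle.length; j++) {
                if (haystack[i + j] != needle[j]) continue outer;
            }
            return i;
        }
        return -1;
    }

    // Usage sketch: locate every big-endian occurrence of "hello world".
    // byte[] file = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("input.atn"));
    // byte[] needle = "hello world".getBytes(StandardCharsets.UTF_16BE);
    // for (int at = indexOf(file, needle, 0); at >= 0; at = indexOf(file, needle, at + 1)) { ... }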

Copyright (c) 2008 Paul Miner <$firstname.$lastname@gmail.com>