2010-01-12

Mac OSX: shell-script to get Network configuration

If you have different "locations" configured in your network environment on Mac OS X, it can be useful to retrieve which location is currently selected, so that a script, for instance, can behave differently depending on the location/environment.

Mac OS X has a command with which you can get this information:
networksetup -getcurrentlocation

This command (networksetup) has to be executed with administrative privileges, however. This means only a user who is allowed to administer the machine can execute:
sudo networksetup -getcurrentlocation

A non-administrative user could be interested in this information too. And since this command does not change the system configuration, one might think it advisable to set the set-user-ID-on-execution bit of networksetup, using
sudo chmod u+s /usr/sbin/networksetup

However, this would open up networksetup for all its other actions as well, so this option is not advisable.
One could instead create a script that executes this command, and chmod & chown that script to run as root (note that many systems ignore the set-user-ID bit on interpreted scripts, so verify this actually works on yours):
TARGET=GetCurrentLocation.sh;
cat <<EOSCRIPT >${TARGET}
#!/bin/bash
networksetup -getcurrentlocation;
EOSCRIPT
sudo chown root ${TARGET};
sudo chmod 5755 ${TARGET};


The aforementioned snippet will create a script GetCurrentLocation.sh that does just that.

Another solution is examining /Library/Preferences/SystemConfiguration/preferences.plist, which holds the required information.

The following script evaluates the contents of that file and extracts which location is currently selected, without the need for any superuser privileges.

#!/bin/bash
# Filename : GetCurrentNetworkLocation.sh
# Version : $Revision: 1.1 $
# Author : Dieter Demerre
# Copyright: (c) 2010- Dieter Demerre
# Licence : (g) GPL2 (http://www.gnu.org/licenses/)
# Package : Mac Os X system scripts
# Project : Network and configuration
# History :
# 2010-01-12 ddemerre 0.1 Initial implementation
#----------------------------------------------------------------------
# Description
# script that will (try to) determine what's the current network location
#----------------------------------------------------------------------
INPUT=/Library/Preferences/SystemConfiguration/preferences.plist;

function error
{
  echo ${@} >&2;
}

if [ ! -r "${INPUT}" ]; then
  error "cannot read plist file ${INPUT}";
  exit 1;
fi

#Get key of currently selected Set
setInfo=$(grep -A 1 "CurrentSet" "${INPUT}");
status=${?};
if [ 0 -ne ${status} ]; then
  error "could not retrieve CurrentSet info (${status}).";
  exit 2;
fi
currentSet=$(echo ${setInfo}|sed 's/[^ ]* *//;s/<string>\/Sets\/\(.*\)<\/string>/\1/;');

#Get line number in file for (start of) definition of the selected set
notBefore=$(grep -n "<key>${currentSet}</key>" "${INPUT}"|cut -d':' -f 1);
if [ -z "${notBefore}" ] || [ "${notBefore}" -le 0 ]; then
  error "did not find key reference ${currentSet} in input (result ${notBefore})";
  exit 2;
fi

# Consider only the lines following the definition-start of the set
lines=$(wc -l "${INPUT}"|awk '{print $1;}');
tail=$(( lines - notBefore + 1 ));
nameInfo=$(tail -n ${tail} "${INPUT}" | grep -A 1 '<key>UserDefinedName</key>' | head -n 2);

# And get the string value of the UserDefinedName key.
echo ${nameInfo}|sed 's/[^ ]* *//;s/<string>\([^<]*\)<\/string>.*/\1/;';
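The sed extraction of the set key can be sanity-checked on a synthetic plist fragment (the key value below is made up; note the pattern anchors on the <string> tag so only the /Sets/ key remains):

```shell
#!/bin/bash
# Sanity check of the CurrentSet extraction on a made-up plist fragment.
setInfo='<key>CurrentSet</key> <string>/Sets/0F81DC42-ABCD</string>';
currentSet=$(echo ${setInfo}|sed 's/[^ ]* *//;s/<string>\/Sets\/\(.*\)<\/string>/\1/;');
echo ${currentSet};
# → 0F81DC42-ABCD
```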


2010-01-02

Airport Extreme, losing WAN (ADSL2 bridged) unnoticed on a regular basis

In 2007, I bought an Apple Airport Extreme access point, connected through an ADSL modem to my ISP.
I discovered that at irregular intervals, the Internet connection would be lost, but the Airport Extreme would not detect this: the green LED continues staring at me defiantly, even though the Airport Extreme's log would show a line stating that the Apple NTP server could not be reached to synchronize the time, whereas some time before the same synchronization was logged as working fine [the log states: Severity:5 Clock synchronized to network time server time.euro.apple.com (adjusted +0 seconds).].


When I switched to ADSL2, these recurring hiccups became more frequent. Whenever the connection dropped, a reboot of the base station through the Airport Utility (menu item "Base Station", option "Reboot") would fix the problem. After some reboot time, during which the switch and airport functionality is temporarily unavailable (so network shares are lost and open network files are closed), the network and the Internet connection would become available again.
When replacing the Airport Extreme with a Mac mini (with the DHCP client on the Mac mini activated), the Internet connection (ADSL2) becomes available and is not dropped (for at least 6 days, over 2 test runs).
Convinced this experience proved the Airport Extreme was faulty, I contacted the Apple store. There someone told me that the Airport Extreme must indeed be faulty, but since the warranty period was over, the price for a repair would equal the price of a new device.

I bought myself a new Time Capsule and replaced the Airport Extreme with it. The Internet is connected through the ADSL2 modem configured for bridging. The Time Capsule is configured to receive a DHCP response on the WAN side (which in fact is a fixed Internet address). The Time Capsule also manages the local wifi, with hidden ESSID and WPA2 encryption. The DHCP server in the Time Capsule provides configured addresses to potentially 5 LAN UTP clients and 7 WIFI clients. Access control lists based on MAC addresses protect one small step further.

Although I'm happy with the increased switch speed (now 1000/100/10 instead of 100/10 Mbit) and the included 1TB Time Machine disk, the new machine displays the same unwanted behaviour. After about 2 to 4 hours, the Internet connection is lost without the access point detecting this loss. A test run with the Mac mini again shows the mini not losing the connection (the mini, however, is not providing DHCP addresses to the intranet, which the Time Capsule is).

I now run a script that, at an interval, tries to contact 5 websites. If all 5 consecutively fail to be contacted, an AppleScript is launched that reboots the base station. These actions are logged, and show me the regularity of the failures:


20100101 165524 lost
20100101 170105 reboot
20100101 170556 OK
20100101 200953 lost [3h4m]
20100101 201533 reboot
20100101 202024 OK
20100101 220443 lost [1h45]
20100101 221009 reboot
20100101 221500 OK
20100101 234550 lost
20100101 235131 reboot
20100101 235623 OK

20100102 010000 suspended script
20100102 120000 launched script again

20100102 134558 lost
20100102 135138 reboot
20100102 135633 OK
20100102 161055 lost
20100102 161619 reboot
20100102 162107 OK
20100102 184857 lost
20100102 185437 reboot
20100102 185924 OK

This rebooting, however, is just a stopgap... and each reboot also clears the logfile of the airport.
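The watchdog described above can be sketched roughly as follows (the site list, interval and the RebootBaseStation.scpt AppleScript are assumptions, not my actual configuration):

```shell
#!/bin/bash
# Watchdog sketch: probe a few sites; when ALL of them fail, assume the WAN is
# gone and reboot the base station through an (assumed) AppleScript.
SITES="www.google.com www.apple.com www.ibm.com www.debian.org www.kernel.org";
INTERVAL=300;

count_failures()
{
  # probe each argument once over http; print how many could not be reached.
  local failures=0;
  for site in ${@}; do
    curl -s -m 10 -o /dev/null "http://${site}" || failures=$(( failures + 1 ));
  done
  echo ${failures};
}

watchdog()
{
  while /bin/true; do
    if [ "$(count_failures ${SITES})" -eq 5 ]; then
      echo "$(date '+%Y%m%d %H%M%S') lost";
      osascript RebootBaseStation.scpt;   # hypothetical AppleScript reboot
      echo "$(date '+%Y%m%d %H%M%S') reboot";
    fi
    sleep ${INTERVAL};
  done
}

# invoke only when explicitly requested, e.g.: ./watchdog.sh run
[ "${1}" = "run" ] && watchdog;
```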


2009-01-20

LaCie 5big network

2009-01-16


Just got me a nice Lacie 5Big network disk.
7.5 TB (5 times 1.5 TByte Seagate Barracuda), configured in RAID5, delivers 5.4 TB of usable diskspace (one disk's worth of capacity goes to parity, and formatting takes its share of the remaining 6 TB).

This system runs Linux, with as its only interface a webpage to configure the system.
It can export shares through smb, afp, ftp and http.

plus:

  • runs very silently
  • relatively easy to configure
  • very nice design

minus:

  • very limited configurability
  • Users and groups can be defined to determine access control, however, user passwords are limited to 8 chars in the web-interface provided, even though the underlying system *does* take md5 passwords.
  • the web page that allows for definition of an e-mail address for the administrator limits the length of the e-mail address to about 30 characters
  • *very limited* share configuration: define shares and some access control to them. That's it. No possibility to "fine-tune" the shares, like creating sub-shares (a share to a portion (subtree) within another share)
  • no possibility to move files from one share to another at system level, which makes moving large volumes between shares a pain.
  • The feature to make a snap-shot of an attached USB-disk, produces a new share (each time), but there's no way to check the status of the copy process
All these software remarks would be easily solved if given some direct (shell) access to the system.
  • Comes with an external power adapter brick, which reduces the design factor by, let's say, 100. But this might be understandable: it's quickly replaceable (in my experience with LaCie power adapters that's a plus), and it reduces heat production *in* the pretty box.

updated 2009-01-23


I upgraded the system on 2009-01-22 to the recently published firmware (2.1.2.rc4, dated 2009-01-15). Since there's no documentation provided on what the firmware fixes (or doesn't), one cannot decide for oneself whether the update brings something you want, or fixes something you have (or don't have) as a problem... Only after upgrading do you get to see what's in the package.
After the upgrade of the system, the (webpage) console requires you to reboot the system. Which I did. After rebooting, all my (already) filled afp-shares were present, BUT
  • my main share ("share"), already holding about 2TB of data, had only the first level (root directory) present. In the subdirs, nothing was present: no dirs, no files. The webpage "browse", however, still showed the content of the shares as it should be.
  • another share (created by the "automatic copy of a usb-disk") has the complete contents present, but permissions for the complete share (when mounted) are all set read-only.

In communication with the rather quick and helpful Belgian helpdesk (through web-ticket communication), I discovered in addition that when "removing" a share, the files remain present on the disk (they are not removed as well), but the content of the (prior) share becomes unavailable, until a share with the same name is created again. Bummer if the share was made using the copy-complete-usb-disk feature, which adds a random number after the snap-share name...
If you removed that share, your disk stays filled, but you can't access the data anymore. I think at least a warning should be given when removing a non-empty share, about the "trailing occupation" of the disk...

Current Status


Ok, the data's on it, and I bought the thing, but for future investments I'm currently looking at something like the QNAP TS-639 Pro NAS, which provides far more functionality for (about) the same price.

2009-01-16

google calendar,... event repeat until never, limited to 366

In the Google Calendar FAQ one can find that event iterations "currently" are limited to 366.
You can create an event that repeats "until never", however.
This event will "stop" appearing in your Google Calendar 366 iterations AFTER the first instance. If you have an event repeating every working day (like the event "working hours"), supposedly never to end, the event stops displaying some 510 days (a year and some months) later (366 working days plus the intervening weekend days).
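A quick sanity check of that span: 366 working-day occurrences cover 366/5 weeks of 7 days, so roughly this many calendar days:

```shell
# 366 working days = 366/5 weeks of 7 days each (integer arithmetic).
echo $(( 366 * 7 / 5 ));
# → 512
```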

BUT
If you sync that calendar (with that event) to iCal (for instance), the event does NOT stop (in iCal) after the expiration date....
So on my Google Calendar I added a new event (a copy of the previous one), never to end, starting the day the previous event "disappeared".
After a refresh of my iCal, I found the event twice. Apparently the never-to-end repetitive event *is* exported as never-to-end; only in Google Calendar does it stop after the iteration limit.

Workaround:
make the original event "stop" repeating on the last day that was reachable, and make the next one start on the first day that is now empty.

e.g.
start working days 2007-01-02 .... until 2008-05-26
new event (copy original) start 2008-05-27 ... until (never)... will see when it disappears again.

2007-09-17

secure synergy

Now my previous post about keyboard and mouse sharing over the network is incomplete. Being a bit paranoid, I do not like my keyboard events, my clipboard and other data to be passed in the clear over the network, so I added an SSH layer:


  • installed cygwin with openssh on the windows machine.

  • put the public ssh-key of my user on laptop into authorized_keys of desktop

  • on laptop: ssh desktopuser@desktop -L24801:localhost:24800

  • on laptop: synergyc localhost:24801



Now I can reuse this process (setting up the SSH tunnel and running synergyc), so the configuration at home (with the desktop being an iMac) can accept the laptop, and without changes to the laptop, the mouse of the iMac might work too.

The phase of setting up the SSH tunnel could try to discover what environment it is in (based upon the IP address received from the DHCP server, OR based upon successfully reaching the SSH server):
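Such environment detection could be sketched as a small helper that maps the DHCP-assigned address onto the server to tunnel to (the subnets and account names here are made-up examples, not my real configuration):

```shell
#!/bin/bash
# Map the current (DHCP-assigned) IP address onto the ssh target to use.
# Subnets and user@host pairs below are examples.
server_for_ip()
{
  case "${1}" in
    (10.0.1.*)    echo homeuser@imac-home;;
    (192.168.5.*) echo worklogin@desktop-work;;
    (*)           return 1;;
  esac
}

# usage sketch (hostname -i assumed to give the current address):
#   host=$(server_for_ip "$(hostname -i)") &&
#     ssh -L24801:localhost:24800 -f ${host} sleep 5 &&
#     synergyc --no-restart --no-daemon localhost:24801
```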

I adapted the synergyc_start script (introduced in my earlier post) into:
#!/bin/bash
while /bin/true; do
  for host in worklogin@desktop-work homeuser@imac-home; do
    ssh -L24801:localhost:24800 -f ${host} sleep 5;
    [ ${?} = 0 ] && synergyc --no-restart --no-daemon localhost:24801;
  done
done


So now that works fine, but hey, I don't want to add the root@laptop public key to the authorized keys of my user@desktop.
I changed the script further, using (among other things) screen to run programs in the background while allowing later access to their console.
My new synergyc_start has been extended to allow root invocation, but root will execute the ssh command as sudo -u user.
I also added an option so the command can be executed with the argument screen to retrieve the screen on the console. The screen is also used to check whether an instance is already running.

#!/bin/bash
# will start proxy-ssh-command in detached screen.

CLIENTNAME=laptopname;

if [ -z "${DISPLAY}" ]; then
  echo "no DISPLAY variable set" >&2;
  exit;
fi

if [ "$( id -n -u )" = "root" ]; then
  SUDO="sudo -u laptopuser ";
  screenname=root_synergy_proxy;
else
  SUDO="";
  screenname=user_synergy_proxy;
fi

case "$(hostname)" in
  (${CLIENTNAME}|${CLIENTNAME}\.*)
    proxycommand="while /bin/true; do
      for host in DT1user@desktop1 DT2user@desktop2 DT3user@desktop3; do
        ${SUDO} ssh -L24801:localhost:24800 -f \${host} sleep 5;
        [ \${?} = 0 ] && synergyc --no-restart --no-daemon localhost:24801;
      done;
    done;";
    ;;
  (*)
    # only allow invocation on the configured machine.
    echo "this script should only run on ${CLIENTNAME}." >&2;
    exit;
    ;;
esac

#remove possible defunct screens
screen -wipe

#check for an existing (running) screen
screen -list|grep -e '\<[0-9]\{1,\}\.'${screenname}'\>' >/dev/null 2>&1;

case "${?}" in
  (0) # an existing screen: retrieve it (if requested)
    [ $# -eq 1 ] && [ "$1" = "screen" ] && screen -dr ${screenname};
    ;;
  (*) # launch the screen instruction (with screen on console if requested)
    [ $# -eq 1 ] && [ "$1" = "screen" ] && resume="" || resume="-d -m";
    screen ${resume} -S ${screenname} bash -c "${proxycommand}";
    ;;
esac;


keyboard and mouse sharing over network

hmmm,

I have a laptop and a desktop, and I want both to be controllable by one set of keyboard + mouse. I discovered synergy (clients exist for Microsoft Windows, Linux, Mac OS X, ...). You can find it at SourceForge if you don't find it in your distribution.
I configured my laptop to be the client and my desktop to be the server, with the server accepting connections from my laptop.
Now I can control both using the keyboard & mouse of my desktop.

Configuration of my windows (Server):

  • Start synergy (on windows machine)
  • select "share this computer's keyboard and mouse"
  • press "configure" (see image right).

  • press the + button in the screens-section, once for each machine you wish to configure; add the server like this AND all the clients.
  • in the "Links" section, configure where each display is "logically positioned": when leaving one display (with the mouse cursor) at one side, which display should the cursor enter, and at what side.
    Also provide (if needed) a "return"-link:
    (DON'T THINK that if you configure leaving machine-A through your left screen side for the right screen side of machine-B, you'll automatically have configured the right side of machine-B to return to the left side of machine-A, because you won't).

configuring the client:

  • Now you don't have to configure your client, only launch it and tell it what server to connect to: synergyc server (see man synergyc for extra options).


Now having it working, I set the synergy software to launch automatically.

Windows:

  • server to launch automatically (using the "autostart" button on the windows synergy application).


Linux (Ubuntu Feisty Fawn):
Implementing suggestions from (amongst others) the Ubuntu forums, I also launch synergyc automatically.

  • I created a script synergyc_start with the instructions to execute:
    #!/bin/sh
    synergyc servername

  • and added an invocation of that script in /etc/gdm/Init/Default, before the sysmodmap=-line.
  • and added another invocation of that script in /etc/gdm/PreSession/Default before the XSETROOT=-line.


2007-06-13

Changing MAC address for ethernet card (in linux)

I have a program that checks its license against the MAC address of an ethernet adapter.
Now the ethernet card got fried: last Friday, lightning struck in the neighbourhood, and not only the NIC (Network Interface Card) was fried. Luckily not the harddisk.
So now I replaced the NIC (and the power supply), but - of course - the license would not register anymore.
Now in Linux you can - if the driver supports it - alter your MAC address:
sudo ifconfig eth0 hw ether 00:01:02:03:04:05
where eth0 is the NIC reference and 00:01:02:03:04:05 the MAC address you want your NIC to use henceforth.
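The full sequence usually involves taking the interface down first, since many drivers only accept the change while the interface is down. A small sketch (interface name and address are examples; the DRYRUN switch previews the commands without root):

```shell
#!/bin/bash
# Sketch: set a new MAC address on a NIC. DRYRUN=echo previews the commands
# instead of executing them; leave it empty to really run them.
DRYRUN=${DRYRUN:-};

set_mac()
{
  # usage: set_mac <interface> <new-mac-address>
  ${DRYRUN} sudo ifconfig "${1}" down;
  ${DRYRUN} sudo ifconfig "${1}" hw ether "${2}";
  ${DRYRUN} sudo ifconfig "${1}" up;
}

# preview without root:
#   DRYRUN=echo set_mac eth0 00:01:02:03:04:05
```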

Ah well, I found a nice page with an overview of how to do 'it' in different OSes (see the link under the title of this blog item).


2007-05-24

ScanSoft RealSpeak 3.51 for Linux - problem with Debian ETCH (ext3 ?)

hi,


I updated my (quick-and-dirty) post originally called "Text To Speech (ScanSoft RealSpeak)", hoping it would become less obscure.


Today I encountered some problems with another installation (being Debian ETCH, kernel 2.6.18-4, with root ext3)... for which I found a workaround.


It seems that ScanSoft RealSpeak 3.5.1 does not function (anymore) in this distribution. The standard application (as used in aforementioned Posting), "hangs" just after outputting "Initialize".


When using strace, it looks like one of the internal functions of RealSpeak loses itself in a loop... forever continuing to look into (digging deeper and reaallly deeper!!!) directory "." (so actually it's not digging deeper, ... but it's not coming back either).


This is probably caused by the (erroneous? obsolete?) assumption of a programmer that readdir(3) (though probably not getdents(2)) will return "." and ".." as its first two entries, thus not needing to "check" them, and jumping straight to number three.

The code seems to go down into all directories of the given engine directory.

But if the result is not ordered with . and .. up front, but with them in some other position (3rd, 4th, ...), these special directory entries are treated like any regular directory and descended into... And if - like all good stories - the story repeats itself (very probable if, e.g., we're examining . as a regular directory, thus descending into it while actually staying put), this could take a while...

Here is some logging:

open("/opt/scansoft/tts/engine/../api/server"...
...
open("/opt/scansoft/tts/engine/../api/./server"...
...
open("/opt/scansoft/tts/engine/../api/././server"...
...
open("/opt/scansoft/tts/engine/../api/./././././././././server"...
...


This snippet can be explained thus:

First, when examining the engine directory, getdir returns .. in some later-than-second place. This directory is treated like any other directory, and since the code is - probably - descending recursively into all directories (looking for all available languages, for instance?), the code examines that directory.

Apparently getdir of that .. directory (actually the directory /opt/scansoft/tts) then returns the directory api (absolutely /opt/scansoft/tts/api, but by the faulty procedure now considered part of the engine tree).

Then a second Murphy: the directory api apparently has its . reference in a later-than-second place, so the anxious procedure goes there... But now we arrive at exactly the same spot (directory api, which again gives . as a directory entry), getting the same result, and we're off for a nice long trip into ././././././././././... and so on
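A robust directory walk skips the special entries by name rather than by position. A minimal shell sketch of the pattern (my reconstruction of what the code should do, not ScanSoft's actual source; it does not handle names with whitespace):

```shell
#!/bin/bash
# Recursively list a tree, skipping "." and ".." by NAME instead of assuming
# they are the first two entries the directory returns.
descend()
{
  local entry;
  for entry in $(ls -af "${1}"); do
    case "${entry}" in
      (.|..) continue;;        # skip by name, whatever their position
    esac
    echo "${1}/${entry}";
    [ -d "${1}/${entry}" ] && descend "${1}/${entry}";
  done
}
```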


Proof for (or at least something making understandable/acceptable) this assumption?

Well, ls -f returns the list of files in the order of the filesystem, so I executed that and got:
$ ls -af /opt/scansoft/tts/engine
libicudata.so.22 tts4sml.so libicuuc.so.22 dec_encrypt.so ttsengine.so ..
dub . headers libxml4c.so.50 xlit_1252.so


On other distributions/versions/systems (where the ScanSoft software does work), this instruction returns:
$ ls -af /opt/scansoft/tts/engine
. dec_encrypt.so headers dub libxml4c.so.50 xlit_1252.so
.. libicudata.so.22 tts4sml.so libicuuc.so.22 ttsengine.so


P.S.

Thanks to my colleague Erik Devriendt for helping me find the origin of the failure.


Solution


A clean solution to the problem would be a bugfix by ScanSoft (now Nuance); such a fix seems to be in order...


A workaround has been composed:


In the process of finding a workaround, I mounted (using sshfs) the engine directory of a working system over the engine directory of a non-working one, and the system functioned...
More fine-tuning of this revealed that especially the headers directory of the working system was important. The other files and directories could be used from the local system (using symlinks).


Anyway... in that process, I tried to compose an image file and mount it through loopback. I then discovered that when an ext2 image was extended with a journal (tune2fs -j /dev/loop123) to ext3, and subsequently mounted, the directory order got scrambled again. Mounting it as ext2 did not solve this.


But when using an (ext2) image mounted on the engine directory, the initialization procedure indeed continued (it did not lose itself in that nasty recursion), but apparently the system then did not find the required files..., because the initialization returned with Error 23 (in api/inc/lh_err.h we find TTS_E_NO_MATCH_FOUND 23). This is the same behaviour as when you request an unknown language code...


In a last desperate attempt, I created an ext2 image (205M), mounted it as /opt/scansoft and reinstalled everything (the rs-api and rs-<lang> packages).


This worked !!!


Although this system now works, I think it safer just to use another distribution or version that does not have this problem.

Procedure for workaround:


In this procedure, we assume it is executed by a user with sudo privileges on dd, losetup, mkfs.ext2, mount, mkdir and dpkg, OR executed the unsafe way: as superuser.

  1. Create the image-filesystem (first on arbitrary location)

    $ img=/opt/scansoft.img;
    $ dev=$(/sbin/losetup -f);
    $ sudo dd if=/dev/zero of=${img} bs=$(( 1024*1024 )) count=205;
    $ sudo /sbin/losetup ${dev} ${img};
    $ sudo /sbin/mkfs.ext2 ${dev};
  2. mount it and install the software:
    $ mnt=/opt/scansoft;
    $ sudo mkdir -p ${mnt};
    $ sudo mount ${dev} ${mnt} -t ext2;
    $ sudo dpkg -i rs-*.deb;
  3. test the directory-order in the image:
    $ ls -af ${mnt}/tts/engine
    and (at least I did) revel at the result:
    .   dec_encrypt.so    headers     dub             libxml4c.so.50  xlit_1252.so
    ..  libicudata.so.22  tts4sml.so  libicuuc.so.22  ttsengine.so
  4. Now finally, test the standard test-program:
    cd /tmp;
    echo "This is a simple test">test.txt;
    /opt/scansoft/tts/api/demos/standard/standard 0 0 /opt/scansoft/tts/engine test.txt;

    You should now see
    Initialize
    Process
    Uninitialize
    after which the current directory should hold a standard.pcm file (of filesize > 0).

To allow automatic (and user-requested) mounting and unmounting of the ScanSoft system, we added a line to /etc/fstab:
/opt/scansoft.img /opt/scansoft ext2 loop,ro,users,exec 0 2
