Advanced computer question
Do you know if it's possible to connect two FTP servers to transfer files between them?
For web development purposes I usually use an FTP client like CuteFTP or FileZilla but that only works for transferring files from the local drive to the server drive. Basically I need to transfer a butt load of files from one server drive to another server drive via FTP.
Comments
Option 1.
1. SSH into one of the servers. (Use PuTTY if you're running Windows.)
2. Log in with your username/password.
3. Use the command-line FTP client on that server to connect to the FTP server on the other machine. The command should be something like "ftp the.other.computer", or "ftp the.other.computer <port>" if it's not running on port 21.
4. Enter your username/password for the other FTP server.
5. Transfer files using the GET and PUT commands.
Option 2.
Transfer the files from one server to your computer, then from your computer to the other server.
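For Option 1, a session from the server you SSH'd into might look roughly like this (the hostname, path and filenames are just placeholders):

$ ftp the.other.computer
Name: youruser
Password: ********
ftp> binary
ftp> cd /path/on/other/server
ftp> put index.html
ftp> mput *.jpg
ftp> bye

GET/MGET pull files from the other server onto the machine you're logged into, and PUT/MPUT push them the other way.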
I'm thinking about switching hosts and I would like to avoid Option 2 because of the sheer size and number of files I have on the server.
If this is something you plan to schedule, and if Linux is your platform and your shell scripting is serviceable, then I would write a script that gathers the contents of the directory (or directories) you are copying to the other server, then recursively iterates through them and copies the data over. Afterwards, if you need to schedule it, add a cron job to the crontab file that runs the script hourly/daily/weekly/monthly, whichever you choose. If scheduling isn't necessary, just run the script whenever you wish.
The command you need is "ftp", and for a client-initiated transfer you would use the "passive" command to switch to passive mode (used for client-initiated data transfers).
For more information see:
http://linux.die.net/man/1/ftp
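A minimal sketch of that idea using the stock ftp client, with a placeholder host, credentials and paths (ftp's mput does not recurse on its own, so for a whole directory tree you would loop this over each subdirectory, or reach for a tool such as ncftpput -R instead):

#!/bin/sh
# Push the contents of one directory to the other server via FTP.
# Hostname, credentials and paths below are placeholders.
HOST=the.other.computer
USER=youruser
PASS=yourpass
SRCDIR=/var/www/site
DESTDIR=/var/www/site

cd "$SRCDIR" || exit 1
# -n disables auto-login so we can supply the credentials ourselves;
# "prompt" turns off per-file confirmation and "passive" switches to
# passive mode for a client-initiated transfer.
ftp -n "$HOST" <<EOF
user $USER $PASS
binary
passive
prompt
cd $DESTDIR
mput *
bye
EOF

# Example crontab entry to run the script nightly at 2am:
# 0 2 * * * /home/youruser/ftp-sync.sh >> /home/youruser/ftp-sync.log 2>&1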
In Windows:
You may be able to do the same thing, although I am not as familiar with the Windows version of the Linux ftp command. In any case, the only difference is the scheduling mechanism: Windows has a graphical Task Scheduler that amounts to the same thing as crontab on Linux.
Let me know if any of this helps. There may be features within certain visual ftp programs that can "batch" this for you. In that case you're golden because all the nitty gritty is done for you.
EDIT:
However, it seems that you want to initiate a data transfer from computer 1 to computer 2 while sitting at computer 3, with no data temporarily being stored on computer 3. Most FTP programs are client/server oriented (as you were saying above), and you are looking to link two remote computers from a local computer rather than linking a local and a remote computer.
It would simply be easier to log on remotely to one of the servers, write the script there to make the request, and then schedule it (if necessary).
Here is a link with more info:
http://www.unix.com/shell-programming-scripting/9174-recursive-ftp-here-last.html
------------------------------
#! /usr/bin/ksh
#
# HardFeed -- Harvest A Remote Directory via
# Ftp Enhanced Exhaustive Duplication
#
# Perderabo 11-23-02
VERSION="1.1" # 03-16-04
USAGE="\
HardFeed [ -v | -s | -d | -r | -f | -m | -p password-file
-l list-command | -x ftp-command ... ] system user [directory]
use \"HardFeed -h\" use for more documentation
-v (verbose) Print stuff during run
-s (symlink) Attempt to duplicate any remote symlinks
-d (directory) Attempt to duplicate any remote directories
-r (recurse) Attempt to descend into any directories and process them
-f (freshen) If remote file is newer than local file, overwrite the local
-m (mode) Attempt a chmod on files and directories that we create
-p (password) Specify a file that contains the password in plaintext
-x (extra) Specify a command to be sent to the ftp client
-l (listcmd) Override the choice of \"ls .\" to get a remote directory"
DOCUMENTATION="HardFeed Version $VERSION
$USAGE
HardFeed copies all of the contents of a remote directory to the current
directory using ftp. It establishes an ftp connection to the remote
site and it uses the ftp command \"ls\" to get a listing of the remote
directory. The two required parameters are the remote system and the user
name. The optional third parameter is the remote directory to copy. The
default is just the home directory of the ftp account.
HardFeed will prompt you for the password. This is very secure but it isn't
any good if you want to run HardFeed automatically. You can set the password in
the environment variable HARDFEED_P as an alternate. HardFeed will set an
internal variable to the password and then clobber the variable HARDFEED_P,
since on some systems, the environment of another process can be displayed.
With most shells, you can also set an environment variable for one command
only, like this: \"HARDFEED_P=xyzzy HardFeed -dr ftpxy joeblow sourcedir\".
A second alternative is to specify a \"password file\" with the -p option.
Such a file contains, in plaintext, the password. HardFeed will read the file
to get the password. You must decide which option makes more sense in your
environment.
Only files are examined. If we don't have a copy of the remote file, we
will get it. HardFeed will never overwrite an existing file system object
with one exception. If you specify -f and we have both a remote file and a
local file, the timestamps are compared. If the remote file is newer, a
retrieval attempt will be made. The local file must be writable for this
to succeed. For the timestamp compare to work, you and the remote system
must be in the same timezone. (You can vary your environment to make this
true.)
Normally symbolic links are ignored. But with -s, we will attempt to create
a symlink with the same link data. Even with -s, we will never overwrite
any existing object with a new symbolic link. You will need to review any
symlinks created and probably correct them.
Normally, directories are ignored. If you specify -d, HardFeed will attempt
to create the directory locally. But again, it will never overwrite an
existing object to create a directory. If you specify -r, HardFeed will
attempt to recurse into a directory and process all of the files there. If
you use both -d and -r, it will copy an entire directory hierarchy. But you
can leave off -d and only pre-create a few directories if you want.
HardFeed will attempt a chmod of any file or directory that it creates if you
specify -m. It will try to match the mode of the remote object.
HardFeed operates by establishing a co-process to the ftp command. Normally,
the output from the co-process is sent to an un-named file in /tmp and
discarded. If you want to capture this output, connect a file to fd 3 and
HardFeed will use it for this purpose. From ksh the syntax is 3>file. You
can also do 3>&1 to see it real time during the run if you really want.
You can make HardFeed send the ftp co-process some extra commands after the
connection is established with -x.
HardFeed gets a directory listing by sending a \"ls .\" command to the server.
Some servers will list dot files with this while others won't. You can use the
-l option to change the command if your server needs a different one to do what
you want. -l \"ls -al\" is one example that I got to work with unix.
For a microsoft ftp server, I had some luck with:
-l \"ls -la\" -x \"quote site dirstyle\"
Note that everything is transferred in binary mode. -x ascii will switch
everything to ascii mode. HardFeed supports embedded spaces in filenames. User
names may be long and contain slashes. All of this may make it somewhat usable
with microsoft ftp servers."
IFS=""
#
# If the password is coming in via the environment, save it in
# a local variable and then clobber the environment variable
unset PASSWORD
if [[ -n $HARDFEED_P ]] ; then
        PASSWORD="$HARDFEED_P"
        HARDFEED_P='********'
fi
#
# Parse Command Line
#
set -A OPT_CMDS_LIST
OPT_DIRCMD="ls ."
OPT_VERBOSE=0
OPT_SYMLINKS=0
OPT_DIRECTORIES=0
OPT_RECURS=0
OPT_FRESHEN=0
OPT_MODE=0
OPT_PASSWORDFILE=""
OPT_CMDS=0
error=0
while getopts :vsdrfmhp:x:l: o ; do
        case $o in
        v)      OPT_VERBOSE=1
                ;;
        s)      OPT_SYMLINKS=1
                ;;
        d)      OPT_DIRECTORIES=1
                ;;
        r)      OPT_RECURS=1
                ;;
        f)      OPT_FRESHEN=1
                ;;
        m)      OPT_MODE=1
                ;;
        h)      echo "$DOCUMENTATION"
                exit 0
                ;;
        p)      OPT_PASSWORDFILE=$OPTARG
                if [[ ! -f $OPT_PASSWORDFILE ]] ; then
                        echo error $OPT_PASSWORDFILE is not a file
                        error=1
                fi
                ;;
        x)      OPT_CMDS_LIST[OPT_CMDS]="$OPTARG"
                ((OPT_CMDS=OPT_CMDS+1))
                ;;
        l)      OPT_DIRCMD="$OPTARG"
                ;;
        ?)      print error argument $OPTARG is illegal
                error=1
                ;;
        esac
done
shift OPTIND-1
if ((error)) ; then
echo "$USAGE"
exit 1
fi
if [[ $# -ne 2 && $# -ne 3 ]] ; then
echo "$USAGE"
exit 1
fi
SYSTEM=$1
USER=$2
DIRECTORY=$3
[[ -z $DIRECTORY ]] && DIRECTORY=.
#
# Read password file if one is supplied
if [[ -n $OPT_PASSWORDFILE ]] ; then
read PASSWORD < $OPT_PASSWORDFILE
fi
#
# Request password if it didn't come in via env or file
if [[ -z $PASSWORD ]] ; then
print -n password -
stty -echo
read PASSWORD
echo
stty echo
fi
#
# FD 3 will be the transcript of the ftp co-process. If the user
# supplied a file for this, we will use that. Otherwise it will go
# to a nameless file in /tmp
if print -u3 " Transcript of the ftp co-process for HardFeed" 2>/dev/null ; then
LOGFILE=""
else
LOGFILE=/tmp/HardFeed.log.$$
exec 3>$LOGFILE
rm $LOGFILE
fi
#
# Max time to wait for arrival of file. This is a long time. During
# an interactive run, the user can use SIGINT if it seems to be taking
# too long. This max is intended to assure that a cron job will not
# hang forever.
OPT_MAXWAIT=15
TIMEOUT=/tmp/HardFeed.timeout.$$
#
# Various other initializations
LEV=0
date "+%Y %m" | IFS=" " read THISYEAR THISMONTH
((LASTYEAR=THISYEAR-1))
STARTPATH=$(pwd)
set -A DIR_FILE_NAME
set -A DIR_LINE_NUM
#
# Function to convert month to numeric
conv_month() {
        typeset -l month
        month=$1
        case $month in
        jan)    nmonth=1 ;;
        feb)    nmonth=2 ;;
        mar)    nmonth=3 ;;
        apr)    nmonth=4 ;;
        may)    nmonth=5 ;;
        jun)    nmonth=6 ;;
        jul)    nmonth=7 ;;
        aug)    nmonth=8 ;;
        sep)    nmonth=9 ;;
        oct)    nmonth=10 ;;
        nov)    nmonth=11 ;;
        dec)    nmonth=12 ;;
        *)      nmonth=0 ;;
        esac
        echo $nmonth
        return $((!nmonth))
}
#
# Function to determine if a file system object exists
#
# neither -a nor -e is really portable 8(
exists() {
[[ -f $1 || -d $1 || -L $1 || -p $1 || -S $1 || -b $1 || -c $1 ]]
return $?
}
#
# Function to wait for a file to arrive
waitfor() {
        wanted=$1
        if ((OPT_MAXWAIT)) ; then
                ((GIVEUP=SECONDS+OPT_MAXWAIT))
        else
                GIVEUP="-1"
        fi
        while [[ ! -f $wanted && $SECONDS -lt $GIVEUP ]] ; do
                sleep 1
        done
        if [[ ! -f $wanted ]] ; then
                echo "FATAL ERROR:" timed out waiting for: 1>&2
                echo "       " "$wanted" 1>&2
                echo
                print -p bye 2>/dev/null
                exit 2
        fi
        return 0
}
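Going by the documentation string at the top of the script, a typical invocation to pull down a whole remote tree (hostname, user and directory are placeholders) might look like this, run from the local directory you want to fill:

HardFeed -v -d -r -m ftp.example.com joeblow public_html 3>hardfeed-transcript.log

Here -d and -r together copy the whole directory hierarchy, -m tries to match the remote permissions, and 3>file captures the ftp co-process transcript as described above; the password is prompted for unless you use HARDFEED_P or -p.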
I just used some simple programs like FlashFXP; it was pretty simple. Connect to server 1 and server 2, then drag and drop between them in the interface.
And here is a link to software that may do just that:
http://www.smartftp.com/ftplib/download/
It supports FXP, and it may be the easiest way to pull this off.
2. Mount the remote directory locally on your computer.
3. Download files from server1 using any FTP client you like.
^_^
BTW, I always thought that FTP is misused in cases like yours... if you need transparency (meaning you only want to abstractly save a file somewhere), you'll need a remote filesystem like NFS/AFS. FTP is a transfer solution rather than a storage solution.
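For example, if server2 exports the target directory over NFS, you could mount it on your own machine (as root, with the NFS client utilities installed; hostname and paths are placeholders) and then let your FTP client download from server1 straight into that mount point:

mount -t nfs server2.example.com:/var/www /mnt/server2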
FXP may solve this problem in a much simpler manner; however, your solution is probably the one I'd use in a Linux environment.