On the local machine:
ssh-keygen -t rsa
– cd ~/.ssh
– cp -p id_rsa.pub authorized_keys2
– chmod go-rwx authorized_keys2
– ssh-agent $SHELL; ssh-add
ssh-copy-id -i id_rsa.pub uwhpsc@166.111.138.172
Reference: http://unixlab.sfsu.edu/~whsu/EXCL/656proj/tutorial (note: open-mpi has no p4pg)
Getting ready for Project 5 --- MPI tutorial
--------------------------------------------
For Project 5, you will be writing a simple parallel
program to run on a network of workstations in Science 252.
This handout contains preliminary information for you to get
set up and run some examples.
SETTING UP THE MACHINES
MPI stands for Message Passing Interface. It is becoming
the standard for parallel programming. You will be running programs
using the MPI library on a network of workstations in Sci 252.
The network of PCs in Sci 252 can boot either Windows NT
or Sun Solaris. You must make sure that the machines you intend
to work on are running Solaris. The MPI installation comes with
a "tstmachines" script that should test if machines are available;
unfortunately we have not been able to get this to work. If you
can successfully "finger" a PC in Sci 252, it should be running
Solaris and available to you for running MPI programs. The server
for the network, sci-252pc20, always runs Solaris.
If you are in the lab and need to reboot an NT station
to run Solaris, go to the initial NT window and click on "Shutdown".
Make sure the option "Shutdown and restart" is selected, and
allow the machine to proceed. When you come to a screen for
selecting NT and Solaris, pick Solaris (of course). Everything
else should be default; when in doubt just hit ENTER.
For initial development, I suggest using a small number
of machines (like 2 or 3). Get that to work, then move to a
larger group. If you wrote clean code, there shouldn't be major
problems.
The common MPI commands are in /usr/local/mpi/bin. You
should add this directory to your path.
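For example, assuming your login shell is csh (adjust the syntax
if you use a different shell), you could add a line like this to
your ~/.cshrc:
# add the MPI commands to the search path (csh syntax assumed)
set path = ($path /usr/local/mpi/bin)
The Bourne-shell equivalent is PATH=$PATH:/usr/local/mpi/bin; export PATH.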
SETTING UP SSH (SECURE SHELL)
When you log on for the first time, you will need to
set up ssh. (Setting it up on one of the clients will get you set up
on all of them.) Follow this procedure:
prompt> /usr/local/bin/ssh-keygen
prompt> cp ~/.ssh/identity.pub ~/.ssh/authorized_keys
prompt> chmod 600 ~/.ssh/authorized_keys
prompt> chmod 700 ~/.ssh
prompt> chmod 644 ~/.ssh/identity.pub
prompt> /usr/local/bin/ssh-agent $SHELL
prompt> /usr/local/bin/ssh-add
When you run ssh-keygen, you will be prompted for a
passphrase. This is like a password; it's case-sensitive and
can include white space. Think of a phrase you can remember easily,
and DO *NOT* SAVE IT IN A FILE!!!!
When you run ssh-add, you will be prompted for your passphrase.
You have to type all 7 lines the first time you log in. After
that, every time you log in, you only have to type the last two
lines to set up ssh and run MPI programs.
RUNNING SOME EXAMPLE PROGRAMS
Once you're set up, copy the example programs cpi.c and
sr.c from my directory ~hsu/mpi_ex on sci-252pc20 (or any of
the other machines in Sci 252).
sr.c only involves two processors. Processor 0 sends a
message to processor 1. Processor 1 waits for this message, and
sends a message back to processor 0. Processor 0 measures the
roundtrip time for a message, for 20 send-receive pairs.
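If it helps to see what this looks like in code, below is a minimal
sketch of the same send/receive roundtrip idea; the actual sr.c may
differ in details such as message size and output format:
/* Minimal sketch of the send/receive roundtrip idea in sr.c.
   Illustration only; the actual sr.c in ~hsu/mpi_ex may differ. */
#include <stdio.h>
#include "mpi.h"

#define NPAIRS 20   /* number of send-receive pairs to time */

int main(int argc, char *argv[])
{
    int rank, i, msg = 0;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < NPAIRS; i++) {
        if (rank == 0) {
            t0 = MPI_Wtime();
            /* send a message to processor 1, then wait for the reply */
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
            t1 = MPI_Wtime();
            printf("roundtrip %d: %f seconds\n", i, t1 - t0);
        } else if (rank == 1) {
            /* wait for processor 0's message, then send one back */
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}
MPI_Recv() blocks until the matching message arrives, so the time
measured by processor 0 covers the full roundtrip.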
cpi.c is a somewhat cleaner version of the pi computation
program that you saw in the book chapter copies you received in
class. Processor 0 reads the value of n from the user, and broadcasts
it to all processors. Each processor does its computations, all
partial results are accumulated using MPI_Reduce(), and processor 0
prints the result and the execution time.
The times reported (using MPI_Wtime() calls) are in
seconds; MPI_Wtime() returns elapsed wall-clock time, so when the
display says 0.1, it means 100 milliseconds.
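For reference, here is a minimal sketch of the overall structure of
such a program. It is based on the widely distributed MPICH cpi
example, so the cpi.c in ~hsu/mpi_ex may differ in details:
/* Sketch of the pi computation: processor 0 reads and broadcasts n,
   each processor computes a partial sum, MPI_Reduce() accumulates
   the result on processor 0.  The actual cpi.c may differ. */
#include <stdio.h>
#include <math.h>
#include "mpi.h"

#define PI25DT 3.141592653589793238462643

int main(int argc, char *argv[])
{
    int n, rank, size, i;
    double mypi, pi, h, sum, x, startwtime = 0.0, endwtime;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    while (1) {
        if (rank == 0) {
            printf("Enter the number of intervals: (0 quits) ");
            fflush(stdout);
            scanf("%d", &n);
            startwtime = MPI_Wtime();
        }
        /* processor 0 broadcasts n to all processors */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (n == 0)
            break;

        /* each processor sums its share of the intervals */
        h = 1.0 / (double) n;
        sum = 0.0;
        for (i = rank + 1; i <= n; i += size) {
            x = h * ((double) i - 0.5);
            sum += 4.0 / (1.0 + x * x);
        }
        mypi = h * sum;

        /* accumulate the partial results on processor 0 */
        MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            endwtime = MPI_Wtime();
            printf("pi is approximately %.16f, Error is %.16f\n",
                   pi, fabs(pi - PI25DT));
            printf("wall clock time = %f\n", endwtime - startwtime);
        }
    }

    MPI_Finalize();
    return 0;
}
The two collective calls do the interesting work: MPI_Bcast()
distributes n from processor 0 to everyone, and MPI_Reduce() with
MPI_SUM adds the partial sums into pi on processor 0.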
To compile an MPI program (for example, cpi.c), type
mpicc cpi.c -o cpi
To run an MPI program, log onto sci-252pc14, and type
mpirun -p4pg pf4 cpi
(pf4 is a file that specifies how processes are assigned to
hosts. More on that later.)
You'll get output similar to this (don't worry about the stty
error messages for now):
stty: : No such device or address
stty: : No such device or address
stty: : No such device or address
Process 0 on sci-252pc14.sci252_cs.sfsu
Process 1 on sci-252pc20.sci252_cs.sfsu
Process 2 on sci-252pc6.sci252_cs.sfsu
Process 3 on sci-252pc7.sci252_cs.sfsu
Enter the number of intervals: (0 quits) 10000 [user entered 10000]
pi is approximately 3.1415926544231239, Error is 0.0000000008333307
wall clock time = 0.000000
Enter the number of intervals: (0 quits) 40000 [user entered 40000]
pi is approximately 3.1415926536418795, Error is 0.0000000000520863
wall clock time = 0.000002
Enter the number of intervals: (0 quits) 0 [user entered 0]
P4 procgroup file is pf4.
prompt>
Your output may also be scrambled a little bit, since you have
several processes printing to the same screen, and their ordering
is not enforced.
As we mentioned earlier, pf4 gives mpirun directions on how
to assign processes to processors. The pf4 included in your examples
looks like this:
sci-252pc14 0 /Users/hsu/mpi_ex/cpi
sci-252pc20 1 /Users/hsu/mpi_ex/cpi
sci-252pc6 1 /Users/hsu/mpi_ex/cpi
sci-252pc7 1 /Users/hsu/mpi_ex/cpi
Each line is a triplet: [hostname] [#processes] [program name].
The first line indicates that the job was started on pc14; notice that
the number of processes specified for the host that started the job
is always 0. The next three lines indicate that one process each is
to be started on pc20, pc6, and pc7. They will run the program
/Users/hsu/mpi_ex/cpi.
You can also put multiple processes on the same host. For
example, to start the job from pc6, put a second process on pc6, and
put one more on pc7, you would make a file:
sci-252pc6 0 /Users/hsu/mpi_ex/cpi
sci-252pc6 1 /Users/hsu/mpi_ex/cpi
sci-252pc7 1 /Users/hsu/mpi_ex/cpi