
    Columbus creates and uses two databases: the omero4_4 db and the columbus_webapp db. Both are backed up by a script which runs under /etc/cron.daily and the backups are, by default, stored in /OMERO/OMERO4_4/db_backup. Backing up the omero4_4 db is covered in a separate technote. To manually back up the columbus_webapp db you must first switch to the 'columbus' user account and then run pg_dump:

    $ su - columbus
    $ pg_dump -v -Fc -f /OMERO/OMERO4_4/db_backup/columbus_webapp-TEST.pg_dump columbus_webapp

    pg_dump -v -Fc -f - the pg_dump options used to create the backup
    /OMERO/OMERO4_4/db_backup/ - the location where the backup will be stored
    columbus_webapp-TEST.pg_dump - the name of the backup file
    columbus_webapp - the name of the db being backed up

    RAH
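    The manual command above can be wrapped in a small script, for example to add a date stamp to the backup file name. This is a sketch, not part of the shipped cron script: the paths are the defaults from this note, and pg_dump is only invoked when it is installed and the backup directory exists.

```shell
#!/bin/sh
# Sketch of a date-stamped columbus_webapp backup, based on the manual
# pg_dump command above. BACKUP_DIR is the default location from this note.
BACKUP_DIR=/OMERO/OMERO4_4/db_backup
STAMP=$(date +%Y%m%d)
BACKUP_FILE="$BACKUP_DIR/columbus_webapp-$STAMP.pg_dump"

# -v verbose, -Fc custom (compressed) archive format, -f output file
if command -v pg_dump >/dev/null 2>&1 && [ -d "$BACKUP_DIR" ]; then
    pg_dump -v -Fc -f "$BACKUP_FILE" columbus_webapp
else
    echo "pg_dump or $BACKUP_DIR not available; would have written $BACKUP_FILE"
fi
```

    Running this daily (e.g. from /etc/cron.daily) keeps one dump per day rather than overwriting a single file.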


    The workflow below details the process of removing the main components of a Columbus installation. Note that this will not revert the system to a vanilla installation. Those package dependencies which were provided by the operating system will remain, as will the user accounts which were generated by the Columbus installation scripts.

    WARNING: This will erase ALL user data. After these steps have been performed it will only be possible to recover the data if you have an appropriate backup available.

    Removing the Columbus packages

    1) Connect to the Columbus server via PuTTY/Terminal

    2) Stop the Columbus service

    $ sudo /etc/init.d/columbus stop

    3) Delete the Columbus file repository

    $ sudo rm -rf /OMERO/OMERO4_4

    4) Access the postgres user account

    $ sudo su - postgres

    5) Delete the omero and webapp databases

    $ dropdb omero4_4

    $ dropdb columbus_webapp

    6) Exit the postgres user account

    $ exit

    7) List all installed Columbus and Acapella packages

    $ rpm -qa | grep -E 'Acapella|Columbus'

    8) Delete any Columbus/Acapella packages listed in the output of the command in step 7)

    $ sudo rpm -e --nodeps <package-name> [<package-name> ...]

    9) Check for any remaining packages:

    $ sudo rpm -qa | grep -E 'Acapella|Columbus'

    The output should be empty.

    10) Remove the Columbus software repository from the /etc/yum.repos.d directory (RedHat Enterprise Linux), or the /etc/zypp/repos.d directory (SuSE Linux Enterprise Server).
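    Steps 7 and 8 can also be combined into a single pipeline. The sketch below demonstrates the Acapella/Columbus package filter against sample package names rather than the live rpm database; the commented line shows the equivalent server-side pipeline (xargs -r skips rpm -e entirely when nothing matches).

```shell
# Demonstrate the package filter from steps 7-9 against sample package
# names (on the server, `rpm -qa` would supply the real list).
matches=$(printf '%s\n' Columbus-2.8.2-1.x86_64 Acapella-4.1.2-1.x86_64 bash-4.2-1.x86_64 \
    | grep -E 'Acapella|Columbus')
echo "$matches"

# Server-side equivalent, combining steps 7 and 8 in one pipeline:
#   rpm -qa | grep -E 'Acapella|Columbus' | xargs -r sudo rpm -e --nodeps
```

    After the pipeline runs, step 9 (re-running the query) should produce empty output.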



    These are backups of the second PostgreSQL database, columbus_webapp. It stores all the data that Columbus needs in addition to what OMERO holds. For example, the tables in columbus_webapp include login and authentication attempts for users connecting to Columbus from a 3rd party app via the webapp, Celery queue information, and things like the publishing status, cluster job status and the remote references for measurements that have been forwarded to Amazon S3 during import for the cluster functionality. Its relevance depends on whether or not you are using features like publishing or cluster computing. RAH


    Celery is a task queuing service. Its primary use is managing image rendering and export jobs submitted to the Columbus server via 3rd party applications, e.g. Spotfire. The Celery system picks these jobs from the queue, runs them asynchronously and, when complete, prepares a response which is then picked up by the webapp and returned to the client or webpage. Celery runs as another service component; it is started and stopped via /etc/init.d/columbus, which in turn uses /etc/init.d/columbus-celeryd. If the Celery service doesn't respond to the standard /etc/init.d/columbus script, you can call the columbus-celeryd script directly:

    $ /etc/init.d/columbus-celeryd stop/start/restart/status

    The Celery service starts multiple worker nodes for accepting image rendering requests; the workers log information about those jobs to /var/log/columbus/web/columbus-images-service.log. RAH


    Initially you should try starting/stopping the Celery service using:

    $ /etc/init.d/columbus-celeryd (stop/start/restart/status)

    If the Celery service is 'hung' and does not respond to either /etc/init.d/columbus stop or /etc/init.d/columbus-celeryd stop, then it might be necessary to kill the Celery-related processes. Typically you will see more than one process: there is a master and, depending on activity, several worker processes. They all run under the columbus account, e.g.:

    $ ps f -fU columbus
    UID       PID   PPID  C STIME TTY STAT TIME CMD
    columbus  12688 12687 0 08:01 ?   S    0:01 nginx: worker process
    columbus  12782 1     0 08:01 ?   Sl   3:11 /usr/local/PerkinElmerCTG/Columbus2.8/webapp/virtualenv/bin/python -m celery.__main__ worker --app=columbus -n c
    columbus  9627  12782 2 13:25 ?   Sl   0:01  \_ /usr/local/PerkinElmerCTG/Columbus2.8/webapp/virtualenv/bin/python -m celery.__main__ worker --app=columbus
    columbus  9731  12782 0 13:25 ?   Sl   0:00  \_ /usr/local/PerkinElmerCTG/Columbus2.8/webapp/virtualenv/bin/python -m celery.__main__ worker --app=columbus
    columbus  9751  12782 3 13:25 ?   Sl   0:01  \_ /usr/local/PerkinElmerCTG/Columbus2.8/webapp/virtualenv/bin/python -m celery.__main__ worker --app=columbus

    You need to kill the master with PID 12782, but before that the worker (child) processes must be killed; you can use pkill -P with the parent PID to do so:

    $ sudo pkill -9 -P 12782
    # Now kill the master
    $ sudo kill -9 12782

    Starting Celery again should now bring it back to life. If this doesn't help, then something may be broken in a way that causes image rendering jobs to fail; in that case check the /var/log/columbus/web/columbus-images-service.log file for errors and forward them to the PKI informatics support team for further troubleshooting advice. RAH
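    The manual steps above can be sketched as a small script. The pgrep -f pattern is an assumption based on the celery command line shown in the ps output (pgrep -o selects the oldest match, i.e. the master); on a machine with no matching processes the script does nothing.

```shell
# Sketch of the hung-Celery recovery steps above. The pattern passed to
# pgrep -f is an assumption derived from the ps listing in this note.
MASTER=$(pgrep -o -f 'celery.__main__ worker' || true)
if [ -n "$MASTER" ]; then
    sudo pkill -9 -P "$MASTER"   # kill the worker (child) processes first
    sudo kill -9 "$MASTER"       # then kill the master
    echo "killed celery master $MASTER"
else
    echo "no celery master found"
fi
```

    After running it, restart the service with /etc/init.d/columbus-celeryd start and confirm the workers reappear in ps f -fU columbus.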


    On a default Columbus+ server users have the option to submit batch analysis jobs locally or within the cluster environment.

    It's not possible to restrict users to one method or the other via the Columbus UI, but by modifying the source code it's possible to remove the local analysis option, meaning users are forced to submit batch analysis jobs to the cluster.

    The following information is valid for Columbus version 2.8.2 and may differ in future versions.

    The following change must be made in the nav_batch.proc file, at line 429:

    analysisServerOptions = vec(COMPUTE_ON_LOCAL_SERVER);

    change to

    analysisServerOptions = vec();

    Note: it is recommended to make a backup of the nav_batch.proc file before modifying it. Any modifications will be reverted during the next Columbus update.
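    The edit can also be applied non-interactively with sed. The sketch below demonstrates the substitution on a stand-in temporary file; on a real server PROC_FILE would point at the installed nav_batch.proc, whose exact path depends on the installation.

```shell
# Apply the line-429 change with sed, demonstrated on a stand-in file.
# On a real server, PROC_FILE would be the path to nav_batch.proc.
PROC_FILE=$(mktemp /tmp/nav_batch.proc.XXXXXX)
echo '    analysisServerOptions = vec(COMPUTE_ON_LOCAL_SERVER);' > "$PROC_FILE"

cp "$PROC_FILE" "$PROC_FILE.bak"   # keep a backup, as recommended above
sed -i 's/vec(COMPUTE_ON_LOCAL_SERVER)/vec()/' "$PROC_FILE"

cat "$PROC_FILE"
```

    Restoring the backup file (and restarting Columbus) reverts the change, which is also what the next Columbus update will do.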


    If the 'Insert Image Column' dialog appears cropped or otherwise distorted (a Windows display-scaling issue), try the following:

    - Exit Spotfire

    - Right click Desktop ... Display Settings

    - Set Scale and layout to 100%

    - Sign out of Windows

    - Sign in and it should appear as normal.




    I have a user who is getting an 'Activation ID/Host owner mismatch' error. He has emailed support a number of times with no response, which is concerning.

    Can someone help?


    University of St Andrews


    After a user logs in to Columbus, the session is kept alive as long as the browser is running with a Columbus application in any of its tabs. More precisely, this applies to all the application pages such as navigation, image analysis, import and export, not to the admin pages or the job status page. The session is killed when a user logs out (as long as there are no background jobs running under that session). The session is also killed 10 minutes after the user closes the browser (or the tabs) that displayed the Columbus app. While background jobs are running the session will not be terminated; the 10-minute timeout is applied once the background job finishes. RAH


    For servers where HTTPS support has been enabled for the Columbus login and webadmin pages, it's possible to disable SSLv3 encryption and leave only TLSv1 support enabled. To do that, add a line to the server section of the /usr/local/PerkinElmerCTG/Columbus/webapp/server/config/nginx.tmpl file to specify ONLY the supported protocols. See the last line in the excerpt below as an example.


    # https server for /login and /webadmin requests
    server {

        listen ${https_port} ssl;
        ssl_certificate ${https_certificate};
        ssl_certificate_key ${https_key};

        # allow only TLSv1 (disables SSLv3)
        ssl_protocols TLSv1;




    The Columbus service must be restarted after making the modifications.

    $ sudo /etc/init.d/columbus restart

    Note: Any modification will be overwritten during the next Columbus update.


    Is there an upper limit to the size of image files imported to Columbus?

    I have been unable to successfully import TIF files > 30 MB. I am using the Import Type method 'Columbus CSV' and have had no issues importing smaller TIF files. The error returned is:


    Columbus could not handle the request:
    Assertion failed: status
    status: 0

    /usr/local/PerkinElmerCTG/Acapella-4.1.2/AcapellaResources/AcapellaColumbusWebapp/ProcLib/columbus/navigator.proc(260) [Columbus::nav_DisplayImage]: 
    /usr/local/PerkinElmerCTG/Acapella-4.1.2/AcapellaResources/AcapellaColumbusWebapp/ProcLib/columbus/util_imageview.proc(262) [::Assert]: Assertion failed: status
    status: 0


    Might a possible solution be to tile/segment these large image files into smaller image files?
