Here are the limits for optimal performance:
- Storage capacity is not limited, because files are stored in a Google Cloud Storage bucket.
- Maximum number of out-of-the-box users is 2K (scalable to 100K+ for enterprise deployments).
- Maximum number of simultaneous UI sessions:
  - 50 for the single-VM-based application.
  - 100 for the Kubernetes-based application (scalable as the cluster grows).
- Maximum number of concurrent SFTP connections is 50 (contact us to scale to a higher number of connections).
Yes, of course. All you need to do is keep the subscription alive, i.e., the billing account should remain active.
Your initial password is stored in the Compute Engine instance metadata. The metadata key is named ADMIN_PASSWORD.
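If you prefer the command line, the metadata entry can be read with gcloud; the instance name and zone below are placeholders, so substitute your own:

```shell
# Hypothetical instance name and zone; substitute your own values.
# Prints the ADMIN_PASSWORD metadata entry of the VM.
gcloud compute instances describe filemgr-vm \
    --zone=us-central1-a \
    --format="yaml(metadata.items)" | grep -A1 "key: ADMIN_PASSWORD"
```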
The admin is the super administrator of the application. The admin user does not have an SFTP account. The SFTP accounts belong to the users that the admin creates.
The installation folder is located at /opt/trillo
The email of every individual user must be unique. You can specify any email address that is unique across your users. However, if you would like this email to be used to recover a forgotten password, then it must be a valid mail address. Also, if you specify a correct email address at the time of account creation, an invitation email is sent to this user asking them to join; they can then set their password. No other information is transmitted via email.
It is always "/Home", under which a user can create directories and files. Currently, it is not possible to change the default home directory of a user; it remains the same over SFTP. If an inbound folder needs to be created, it should be created inside the home directory, e.g. ./Home/inbound
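For example, the inbound folder can be created over SFTP in a single batch session; the user name and hostname below are placeholders:

```shell
# Hypothetical user and host; creates an inbound folder under the SFTP home.
sftp [email protected] <<'EOF'
mkdir Home/inbound
bye
EOF
```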
A group folder is essentially a shared drive where designated users can share files and folders. This group drive is mounted into each selected user's tree, no matter where they log in from. The mounted folder can be found under ./Group-Folders
A group folder cannot be deleted the way a regular folder can, because it is essentially a mounted drive.
Inside the application there is a menu item for checking the logs. On this page you can see the transfers along with their timestamps. If you would like to monitor and analyze such audit logs system-wide, connect to the local MySQL server using the credentials provided in the docker-compose.yaml file located in the installation folder.
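As a sketch, assuming the default installation folder (/opt/trillo) and that the MySQL service is reachable on the local host; the user and database names below are placeholders, so read the real ones from the compose file:

```shell
# Look up the MySQL credentials in the compose file.
grep -i 'mysql' /opt/trillo/docker-compose.yaml

# Connect with the values found above (user and database names are placeholders).
mysql -h 127.0.0.1 -u trillo_user -p trillo_db
```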
Yes, it is possible to move the file manager's server name to your own DNS. Please contact us at [email protected] and have your DNS account ready. The transfer takes a couple of hours, which we do on a time-and-materials (T&M) basis.
The supported modes are SFTP (no SSH/SCP) and the web UI. If you would like to transfer a large number of files, the best way is to use the upload utility (gsutil) provided by Google Cloud Platform.
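For example, a large local directory can be uploaded in parallel with gsutil; the bucket name and paths below are placeholders:

```shell
# -m runs the copy with parallel transfers; -r recurses into the directory.
gsutil -m cp -r ./local-files gs://your-bucket/target-folder/
```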
Every SSH public key has three sections separated by whitespace: the key type, the base64-encoded key material, and a comment. Please paste the complete key.
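A quick way to sanity-check a key before pasting it is to count its whitespace-separated fields. The helper below is only an illustration, and the key in the example is truncated and hypothetical:

```shell
# Returns success if the key line has exactly three fields:
# key type, base64 key material, and a comment.
check_pubkey() {
  [ "$(printf '%s' "$1" | awk '{print NF}')" -eq 3 ]
}

check_pubkey "ssh-rsa AAAAB3NzaC1yc2E user@laptop" && echo "complete" || echo "incomplete"
```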
When the SFTP VM runs in GCP, GCP may automatically create system-wide SSH user accounts on it. If you create a user whose user ID matches one of these auto-created VM accounts, SFTP login will fail. All other, non-clashing SFTP users will work fine.
Application binaries can only be upgraded by the DevOps team of the GCP project. Trillo has no backdoor with which the application binaries can be updated. The upgrade procedure is described on the maintenance help page. However, Trillo can query the application for its license status and software version if need be.
The system shell scripts are publicly listed under the following GCS bucket folder: gs://trillo-public/fm/ga/scripts. These scripts are run remotely by the FileManager App server, as a super-user, on the SFTP VM OS.
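The contents of that folder can be inspected with gsutil, since the bucket is publicly readable:

```shell
# List the system shell scripts published in the public bucket.
gsutil ls gs://trillo-public/fm/ga/scripts/
```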
Yes, you can achieve this manually.
In our Kubernetes Marketplace solution, it is possible to specify your own storage bucket. Only one storage bucket is required.
No, there is no such limit.
Yes, you can access it in our GKE Marketplace solution.
Our VM Marketplace solution has a fixed cost. For a scalable solution, we recommend using the Kubernetes Marketplace app. Pricing for the scalable solution is based on the number of containers running at any time.
Yes, but it can only be done globally and only once, since it affects all users that are mounted after the change is made. Therefore, plan ahead; otherwise, your previously created users will not have the benefit of these global settings. The change can be made through the scripts located under the /gcs/system folder. Details can be provided by writing an email to us.