I have a script that runs on Exchange servers to look up accepted domains and perform an MX lookup for each domain. I then want to write these results to a custom field for each agent; however, the results are often more than 100 characters. Is there any way to expand the size of this field, or is there somewhere better to store these results?
Almost every server I monitor accepts multiple email domains, and I need to be able to filter and report on the results.
Any help would be greatly appreciated.
Do you really need to report on the entire value? If not, you can always pipe it out to a file and then use a 'GetFile' to bring the list up as a complete file; that's much more scalable for large amounts of information.
That way, if you only need to report on a subset of the information, you can filter it at the machine first in a scripting language and send only the minimum required data back up to the field. If you could share an example of the data, that would help.
I too have this problem, and I have a solution I would like to share. Disclaimer: this method isn't Kaseya approved, so please use it at your own risk. If you would like Kaseya to actually change the product to be more functional, please search for and upvote one of the many feature requests asking for this limitation to be removed or increased.
1. On the back-end server, open SSMS and execute the query below. This removes the 100 char limit on the column that stores custom field data. You would think this is all you need to do, but the "Update System Info" command in agent procedures also has a 100 char limit imposed. I do not know where that limit is defined or how to change it, so we will need to find another way to skin this cat: the SQLWrite command.
USE ksubscribers
GO
ALTER TABLE auditRsltManualFieldValues ALTER COLUMN FieldValue VARCHAR(MAX)
GO
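As a quick sanity check after running the ALTER, you can confirm the column was actually widened by querying the standard metadata views (this is ordinary T-SQL against the same table; nothing Kaseya-specific):

```sql
-- Check the new column definition; VARCHAR(MAX) reports a length of -1
SELECT DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'auditRsltManualFieldValues'
  AND COLUMN_NAME = 'FieldValue';
```

If CHARACTER_MAXIMUM_LENGTH still shows 100, the ALTER did not take and you should not proceed to the later steps.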
2. Run the query below and locate the ID value for the custom field where you want to store more than 100 characters of data.
select * from auditRsltManualFields
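If you have a lot of custom fields, you can narrow the result down by name rather than scanning the full output. Note the column names here are an assumption based on the SELECT * above ('MX Records' is just a placeholder for whatever you named your field; confirm the actual name column from the full query first):

```sql
-- Find the ID for one custom field by its display name
-- ('MX Records' is a hypothetical example field name)
SELECT id, fieldName
FROM auditRsltManualFields
WHERE fieldName = 'MX Records';
```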
3. On your front-end server, navigate to "c:\Kaseya\xml\Procedures\AgentProcSQL\0\SQLWrite". If you do not have a SQLWrites.xml file there, create one; otherwise, open it for editing. You will want to add the section of XML below. I am including the whole XML in case you don't have a SQLWrites.xml file. Update XXXXXXXXXX with the ID you got in step 2. Note that I am passing data into SQL using the variable #global:sqlwrite#; you can change this to whatever you are using in your agent procedure.
<?xml version="1.0" encoding="utf-8" ?>
<queryList>
  <queryDef label="NoLimitSQLWrite" sql="update ksubscribers.dbo.auditRsltManualFieldValues Set FieldValue = '#global:sqlwrite#' Where fieldnameFK = 'XXXXXXXXXX' and agentguid = '#vMachine.agentGuid#'"/>
</queryList>
4. To use this in your agent procedures, first run the command "Update System Info", select the field where you want to store the data, and set it to NULL (or anything, really; it doesn't matter). You do this because, if the agent has no data at all in any custom field, the agent's agentguid is not in the auditRsltManualFieldValues table. If its agentguid isn't in the table, the NoLimitSQLWrite command will fail (because it's doing an UPDATE, not an INSERT; this could probably be improved upon by creating a stored procedure with logic to choose INSERT vs UPDATE, but just NULLing the field out in the AP works fine and is less work). Next, use a "Get Variable" command and set the value for the global:sqlwrite variable. Finally, add the command "SQLWrite" and select the "NoLimitSQLWrite" operation (or whatever you named it in step 3).
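The INSERT-vs-UPDATE improvement mentioned above could be sketched as an upsert stored procedure along these lines. This is untested and purely illustrative: the procedure name and parameter types are my own assumptions (the XML above passes agentguid as a quoted string, so VARCHAR parameters are used here), and I am assuming agentGuid, fieldnameFK, and FieldValue are the only columns that matter for the write:

```sql
-- Hypothetical upsert: update the row if it exists, otherwise insert it,
-- which would remove the need for the preliminary "Update System Info" NULL step.
CREATE PROCEDURE dbo.UpsertManualFieldValue
    @agentGuid   VARCHAR(64),
    @fieldnameFK VARCHAR(64),
    @fieldValue  VARCHAR(MAX)
AS
BEGIN
    IF EXISTS (SELECT 1 FROM auditRsltManualFieldValues
               WHERE agentGuid = @agentGuid AND fieldnameFK = @fieldnameFK)
        UPDATE auditRsltManualFieldValues
        SET FieldValue = @fieldValue
        WHERE agentGuid = @agentGuid AND fieldnameFK = @fieldnameFK;
    ELSE
        INSERT INTO auditRsltManualFieldValues (agentGuid, fieldnameFK, FieldValue)
        VALUES (@agentGuid, @fieldnameFK, @fieldValue);
END
```

You would then point the queryDef's sql attribute at an EXEC of this procedure instead of the raw UPDATE. Verify the real column names and types in your ksubscribers database before creating anything.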
I really hope this helps someone else out. Please use it at your own risk. This is almost certainly not supported by Kaseya, and storing large amounts of data in custom fields can lead to performance issues or unforeseen errors elsewhere. You have been warned.
Thank you for the reply. It would have been good if there were an easy way to expand this field.
However, I have changed the output from the script to return just a summary of the results, and our technicians can investigate further if they want more information.
Hi Brad, yes I totally understand, and agree that would be nice.
The challenge we have is that because you can add as many fields as you want, allowing them to contain large amounts of data creates a great deal of complexity. The custom fields are exposed throughout the product: in Views, in Service Desk, in reporting. With too many large fields, the overhead added to internal SQL queries throughout the product can start to become an undue burden. It's a case of trying to balance flexibility and customisation with the constant demand for better performance.
As the product continues to expand and improve on scalability and performance (remembering we are the most scalable system of our type on the market today), we will find new ways to address the challenge of storing custom data in the DB and exposing it in various ways.
Thanks for your input.