Multithreading KIE Workbench - optaplanner

Is there a way to set how many CPUs OptaPlanner can use?
I found in the docs which line should be added to the solver config:
<moveThreadCount>4</moveThreadCount>
But the solver config is not editable from the Source tab, and I don't see a way to add this field from the model.
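For what it's worth, if you build and run the solver from your own code rather than through Workbench, the same setting can be applied programmatically. A minimal sketch, assuming a recent OptaPlanner version and a hypothetical planning solution class MySolution:
import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.api.solver.SolverFactory;
import org.optaplanner.core.config.solver.SolverConfig;

public class MoveThreadCountExample {
    public static void main(String[] args) {
        // Load the base solver config from XML, then override the move thread count.
        SolverConfig config = SolverConfig.createFromXmlResource("solverConfig.xml");
        config.setMoveThreadCount("4"); // a String, since "AUTO" and "NONE" are also valid
        SolverFactory<MySolution> factory = SolverFactory.create(config);
        Solver<MySolution> solver = factory.buildSolver();
    }
}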

Related

Drools workbench 6.4.0.Final - Executing multiple rules that have insert(object) in Drools is not working

I have a Guided decision table where 2-3 rules get fired, and each fired rule creates a new Fact object via insert(object);
The issue is that it sometimes creates 1 object and sometimes 2 objects.
Whenever only 1 object is created, a re-trigger then creates 2 objects.
This feels like instability in the Drools engine. Any quick suggestions are much appreciated. I am using Drools Workbench to create the Guided rules.
Drools version: 6.4.0.Final
Cheers,
Kalyan

Disable console output from the OPAL project?

I'm using the OPAL framework to implement static analyses. I was wondering whether it is possible to suppress the framework's console output, which is printed during execution. The following shows a part of the output.
...
[info][OPAL] Bytecode Representation - Development Build (assertions are enabled)
[info][project configuration] the JDK is part of the analysis
[warn][project configuration] supertype information incomplete
...
I found that OPAL has several LogLevels (i.e. WARN, INFO, ERROR), but I couldn't find a way to specify the logging granularity. I'm really only interested in warnings and errors and would like to suppress the (massive) output at the info level.
I have since figured out that it is possible to suppress OPAL's console output. The OPAL logging mechanism uses several LogContext objects: there is one GlobalLogContext plus one LogContext per available Project. Since they are independent, it is necessary to silence both types.
The global context is used for every logging event that does not happen in the context of a specific project, whereas a project-specific context is used to log messages belonging to that project.
The OPALLogger.updateLogger(...) method can be used to replace the Logger associated with a LogContext. If you are running OPAL from the command line, a ConsoleOPALLogger can be used as follows:
import org.opalj.log.{ConsoleOPALLogger, GlobalLogContext, OPALLogger}
val project = ...
// Print only messages at Error level or above, for both the project-specific context...
OPALLogger.updateLogger(project.logContext, new ConsoleOPALLogger(true, org.opalj.log.Error))
// ...and the global context, which covers all non-project log events.
OPALLogger.updateLogger(GlobalLogContext, new ConsoleOPALLogger(true, org.opalj.log.Error))

Migrating Transformations in Pentaho PDI

We are using two servers, one as preprod and the other as Production. When we migrate jobs or Transformations from preprod to Prod, their connection properties are copied as well, and this affects our Production job execution.
Can someone let me know how to migrate transformations without copying their connections to the other server?
From the Tools->Options menu, there are two checkboxes that affect PDI's import behavior: "Replace existing objects on open/import" and "Ask before replacing objects".
Normally when migrating between environments, I set the first option to false. That way, if a connection definition already exists, it is silently not replaced. The other way to go is to turn both options on and answer 'No' when asked to replace an existing definition.
In this way, a transform/job that runs on pre-prod can simply be exported and imported into prod without changing anything, and it runs against prod in the new environment as long as the connections are named the same.
The only thing to watch out for is importing a new connection definition for the first time. There will be no warning that a new connection object is being created, and after import it will still point to pre-prod. After each new connection import, you need to change the connection definition to point to the new environment. The good news is you only have to do that once.
I wish they had an option, or just an info dialog to show all new connection objects created as a result of the import; that way you would know exactly what you need to change. But alas -- earwax.
If by 'connection' you mean 'database connection', JNDI allows you to give connections a symbolic name that is independent of your environment: it is when you configure your environment (e.g. biserver or baserver) that you specify which database (JDBC driver, IP and port, ...) this symbolic name refers to.
So your transformations don't contain any reference to a server address, and you can deploy them "as is".
I use JNDI for my CDE dashboards in biserver too: to deploy a dashboard, I just export it from the dev environment and import it into the preprod environment without modifying anything.
There are a lot of resources on the web about JNDI. Check the Pentaho documentation too.
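To illustrate the idea outside of PDI, here is a minimal Java sketch of a JNDI lookup; the name jdbc/mydb and the surrounding container environment are assumptions for the example, not something the answer above specifies:
import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;

public class JndiLookupExample {
    public static void main(String[] args) throws Exception {
        // The code only knows the symbolic name; each environment (dev, preprod,
        // prod) binds "jdbc/mydb" to its own host, port and credentials.
        DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/mydb");
        try (Connection conn = ds.getConnection()) {
            System.out.println("Connected to: " + conn.getMetaData().getURL());
        }
    }
}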

Modifying log4j.properties file on AWS Elastic MapReduce

I'm using AWS Elastic MapReduce and I would like to be able to set the logging level. For example, I would like for log.isDebugEnabled() to return true. A bit of googling led me to find this blog article:
http://vangjee.wordpress.com/2012/03/24/an-approach-to-controlling-logging-on-amazon-web-services-aws-elastic-mapreduce-emr/
which basically suggests writing a shell script to copy and overwrite the local log4j.properties file. This seems like a complicated approach. I would prefer a simpler way of setting the debug level. Is there any way?
There are two other ways:
1) Use the hadoop daemonlog -setlevel <host:port> <classname> <level> command to set the logging level for a given Hadoop daemon and class name.
2) Visit the JobTracker's web UI and set the level for the log name of interest (see the sketch below). The web UI URL would be:
http://<host:port>/logLevel
Both of these ways only set the log levels for as long as the daemons are running; as soon as the daemons are restarted, they will pick up the levels in log4j.properties again.
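Since the logLevel page just takes the logger name and the desired level as query parameters, the second option can also be scripted. A rough sketch in Java; the host, port (50030 is the classic JobTracker web port) and logger name are placeholder assumptions:
import java.net.HttpURLConnection;
import java.net.URL;

public class SetHadoopLogLevel {
    public static void main(String[] args) throws Exception {
        // The /logLevel servlet reads the "log" and "level" query parameters,
        // the same values the web UI form submits.
        URL url = new URL("http://jobtracker-host:50030/logLevel"
                + "?log=org.apache.hadoop.mapred&level=DEBUG");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println("Response code: " + conn.getResponseCode());
        conn.disconnect();
    }
}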

Logging Frameworks

For a .NET application, we need a logging framework. The main requirements are:
1) Support for different logging levels, like Debug, Warn, etc.
2) A new log file should be created when the current file exceeds a particular size.
3) The backup log files should be deleted after a configured time period, e.g. 1 day.
Are there any frameworks that satisfy the third criterion?
Regards
Sabarish
Log4Net can do pretty much anything with custom appenders. Take a look here: How I can set log4net to log my files into different folders each day? or here: Can Log4Net Delete Log Files Automatically?
Serilog provides this; e.g.
var log = new LoggerConfiguration()
    .WriteTo.RollingFile("C:\\Logs\\myapp-{Date}.txt",
        fileSizeLimitBytes: 123456,      // cap on the size of any single log file
        retainedFileCountLimit: 31)      // keep at most 31 files, i.e. about a month
    .CreateLogger();
Files are rolled each day, with the size limit being more of a "safety" feature than a rolling strategy, but the results should match pretty closely with what you're looking for.
Being tired of the overcomplication of log4net, NLog, etc., I wrote a somewhat simple but sufficient logging framework myself: https://github.com/aloneguid/logmagic
