The PerfectMatch…
By multifarious.filkin.com — Published On :: Tue, 12 Dec 2023 18:05:34 +0000

In the world of translation, Trados Studio's PerfectMatch feature is like the overachieving student who always gets straight A's, and its academic partner is the brilliant but slightly disorganised professor. PerfectMatch, with its meticulous and precise matching capabilities, often finds itself patiently sorting through the professor's vast but somewhat chaotic repository of knowledge. Picture PerfectMatch … Continue reading The PerfectMatch…

Full Article — Studio Tips, localization, PerfectMatch, powershell, Trados Studio
API Management: The missing link for SOA success
By sanjiva.weerawarana.org — Published On :: Wed, 29 Aug 2012 13:32:00 +0000

Nearly 2 years ago I tweeted: [embedded tweet missing]. Well, unfortunately, I had it a bit wrong. APIs and services do have a very direct and 1-1 relationship: an API is the interface of a service. However, what is different is that one is about the implementation and is focused on the provider, while the other is about using the functionality and is focused on the consumer. The service, of course, is what matters to the provider, and the API is what matters to the consumer. So it's clearly more than just a new name.

Services: If you build it will they come?

One of the most common anti-patterns of SOA is the one service - one client pattern. That's when the developer who wrote the service also wrote its only client. In that case there's no sharing, no common data, no common authentication and no reuse of any kind. The number one reason for SOA (improving productivity by reusing functionality as services) is gone. It's simply client-server at the cost of having to use interoperable formats like XML, JSON, XML Schema, WSDL and SOAP.

There are two primary reasons for this pattern being so prevalent. The first is a management failure whereby everyone is required to create services for whatever they do, because that's the new "blessed way". There's no architectural vision driving proper factoring. Instead it's each person, or at least each team, for themselves. The resulting services are only really usable for that one scenario - so no wonder no one else uses them! Writing services that can serve many users requires careful design, careful thinking and a willingness to invest in the common good. That's against human intuition and something that will happen only if it's properly guided and incentivized. The cost of writing common services must be paid by someone and will not happen by itself.

That is, in effect, the second reason why this anti-pattern exists: the infrastructure in place for SOA does not support or encourage reuse. Even if you had a service that is reusable, how do you find out how well it works? How do you know how many people are using it? Do you know what time of day they use it most? Do you know which operations of your service get hit the hardest? Next, how do others even find out you wrote a service and that it may do what they need? SOA Governance (for which WSO2 has an excellent product: WSO2 Governance Registry) is not focused on encouraging service reuse but rather on governing the creation and management of services. The SOA world has lacked a solution for making it easy to help people discover available services and to manage and monitor their consumption.

API Management

What's an API? It's the interface to a service. Simple. In other words, if you don't have any services, you have no APIs to expose and manage. API Management is about managing the entire lifecycle of APIs. This involves someone who publishes the interface of a service into a store of some kind. Next, it involves developers who browse the store to find APIs they care about and get access to them (typically by acquiring an access token of some sort), and then the developers using those keys to program accesses to the service via its interface.

Why is this important? In my opinion, API Management is to SOA what Amazon EC2 is to Virtualization. Of course virtualization has been around for a long time, but EC2 changed the game by making it trivially simple for someone to get a VM.
It brought self service, serendipitous consumption, and elasticity to virtualization. Similarly, API Management brings self service and serendipitous consumption by allowing developers to discover, try and use services without requiring any kind of "management approval". It also allows consumers to stop worrying about scaling - they just indicate the desired SLA (typically in the form of a subscription plan) and it's up to the provider to make it work right.

API Management and SOA are married at the hip

If you have an SOA strategy in your organization but don't have an API Management plan, then you are doomed to failure. Notice that I didn't even talk about externally exposing APIs - even internal service consumption should be managed through an API Management system, so that everyone has clear visibility into who's using what service and how much is used when. It's patently obvious why externally exposing services requires API Management. Chris Haddad, WSO2's VP of Technology Evangelism, recently wrote a superb whitepaper that discusses and explains the connection between SOA and API Management. Check out "Promoting service reuse within your enterprise and maximizing SOA success" and I can guarantee you will leave enlightened. In May this year, a blog on highscalability.com talked about how "Startups Are Creating A New System Of The World For IT". In it, the author talked about open source as the foundation of this new system and SOA as the load-bearing walls of the new IT landscape. I will take it to the next level and say that API Management is the roof of the new IT house.

WSO2 API Manager

We recently introduced an API Management product: WSO2 API Manager. This product comes with an application for API Providers to create and manage APIs, a store application for API Developers to discover and consume APIs, and a gateway to route API traffic through. Of course, all parts of the product can be scaled horizontally to deal with massive loads. The WSO2 API Manager can be deployed for internal consumption, external consumption or both. As with any other WSO2 product, this too is 100% open source. After you read Chris' whitepaper, download this product, sit it next to your SOA infrastructure (whether it's from us or not) and see what happens!

Full Article — api, cloud, soa, wso2
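To make the consumer-side flow described in that post concrete, here is a minimal sketch of the subscribe/token/invoke cycle: a developer has subscribed to an API in a store, exchanges the issued credentials for an access token, and calls the service through the gateway. The endpoints, key names and token flow shown are hypothetical placeholders (a generic OAuth2 client-credentials exchange), not the actual WSO2 API Manager interface:

```python
import requests

# Hypothetical endpoints -- placeholders, not a real gateway.
TOKEN_URL = "https://gateway.example.com/token"
API_URL = "https://gateway.example.com/orders/v1/status"

# 1. Subscribing to the API in the store yields application credentials
#    (illustrative values).
CONSUMER_KEY = "my-app-key"
CONSUMER_SECRET = "my-app-secret"

# 2. Exchange the credentials for an access token.
resp = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials"},
    auth=(CONSUMER_KEY, CONSUMER_SECRET),
)
token = resp.json()["access_token"]

# 3. Invoke the API through the gateway, which can meter the call,
#    enforce the subscription plan (SLA) and route to the backend service.
result = requests.get(API_URL, headers={"Authorization": f"Bearer {token}"})
print(result.status_code, result.json())
```

The point of routing even internal calls this way is that the gateway sees every request, which is exactly the visibility into "who's using what service and how much" that the post argues for.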
Congratulations Dr. Malinda Kaushalye Kapuruge!
By sanjiva.weerawarana.org — Published On :: Sun, 23 Jun 2013 06:47:00 +0000

It gives me great pleasure to post extremely belated (he completed in October last year!) congratulations to Dr. Malinda Kaushalye Kapuruge on completing his Ph.D. in Computer Science at Swinburne University in Australia. Kau's thesis topic was "Orchestration as Organization: Using an organisational paradigm to achieve adaptable business process modelling and enactment in service compositions", supervised by Prof. Jun Han and Dr. Alan Colman. Kau is going to stay on at Swinburne as a Research Scientist for some time. Kaushalye worked at WSO2 for 2 years, from 2006 to 2008, before going to grad school to pursue his Ph.D. Congratulations and good luck! (I'm going to post a few catch-up congratulations so I can be up to date :-).)

Full Article — grad school, sri lanka
The problem is not AI (El problema no es la IA)
By traduccionjuridica.es — Published On :: Wed, 10 Jul 2024 08:07:29 +0000

There is a lot of controversy around Artificial Intelligence. Some think it is the cause of job losses and falling rates. But that is not the case. We explain it here. It's the economy, stup$#. This phrase (and the swear word) won a… The post "El problema no es la IA" appeared first on Traducción Jurídica.

Full Article — Club Grandes Traductores, formación para traductores, Inteligencia Artificial, traducción automática, Traducción jurídica
Upcoming dates for our online courses (Próximas convocatorias de nuestros cursos en línea)
By traduccionjuridica.es — Published On :: Wed, 21 Aug 2024 10:54:38 +0000

If you want to give your business a boost in September, don't miss all these online training offerings for translators that we have for you. Keep reading. We can tell you are coming back from your holidays: we are receiving a lot of bookings for the courses… The post "Próximas convocatorias de nuestros cursos en línea" appeared first on Traducción Jurídica.

Full Article — Cursos de traducción, Club de los Grandes Traductores, cursos de traducción, formación para traductores, traducción financiera, Traducción jurídica, traducción médica
SPDY - 90% of all Gmail Traffic via Chrome/HTTPS
By pzf.fremantle.org — Published On :: Wed, 19 Jan 2011 18:14:00 +0000

Anyone watching the SPDY-DEV group will have heard something pretty amazing today. SPDY is a proposed alternative to HTTP from Google. It was launched in November 2009 and has been chugging along making progress ever since, but frankly without much further buzz. Today, Google admitted on the mailing list that 90% of all Gmail and other Google services go via SPDY when the client is Chrome and SSL is used: "Yes, indeed SPDY is enabled in Chrome and on Google servers for all SSL traffic at this point. (Actually, we do 90% on SPDY, with a 10% holdback for purposes of A/B comparisons)." This is pretty damn significant real-world usage.

Full Article
Understanding ESB Performance & Benchmarking
By pzf.fremantle.org — Published On :: Tue, 18 Sep 2012 20:51:00 +0000

ESB performance is a hot (and disputed) topic. In this post I don't want to talk about different vendors or different benchmarks. I'm simply trying to help people understand some of the general aspects of benchmarking ESBs and what to look out for in the results. The general ESB model is that you have some service consumer, an ESB in the middle, and a service provider (target service) that the ESB is calling. To benchmark this, you usually have a load driver client, an ESB, and a dummy service.

+-------------+      +---------+      +---------------+
| Load Driver |------|   ESB   |------| Dummy Service |
+-------------+      +---------+      +---------------+

Firstly, we want the Load Driver (LD), the ESB and the Dummy Service (DS) to be on different hardware. Why? Because we want to understand the ESB performance, not the performance of the DS or LD. The second thing to be aware of is that the performance results are completely dependent on the hardware, memory, network, etc. used. So never compare results from different hardware.

Now there are three things we could look at:

A) Same LD, same DS, different vendors' ESBs doing the same thing (e.g. content-based routing)
B) Same LD, same DS, different ESB configs for the same ESB, doing different things (e.g. static routing vs content-based routing)
C) Going via the ESB compared to going direct (e.g. LD--->DS without the ESB)

Each of these provides useful data, but each also needs to be understood.

Metrics

Before looking at the scenarios, let's look at how to measure the performance. The two metrics that are always a starting point in any benchmark of an ESB are the throughput (requests/second) and the latency (how long each request takes). With latency we can consider overall latency - the time taken for a completed request observed at the LD - and the ESB latency, which is the time taken by the message in the ESB. The ESB latency can be hard to work out. A well-designed ESB will already be sending bytes to the DS before it has finished reading the bytes the LD has sent it. This is called pipelining. Some ESBs attempt to measure the ESB latency inside the ESB using clever calculations. Alternatively, scenario C (comparing via-ESB vs direct) can give an idea of the ESB latency.

But before we look at the metrics we need to understand the load driver. There are two different models of load driving:

1) Do a realistic load test based on your requirements. For example, if you know you want to support up to 50 concurrent clients each making a call every 5 seconds on average, you can simulate this.
2) Saturation! Have a large number of clients, each making a call as soon as the last one finishes.

The first is aimed at testing what the ESB does before it is fully CPU-loaded. In other words, if you are looking to see the effect of adding an ESB, or to compare one ESB to another under realistic load, then #1 is the right approach. In this approach, looking at throughput may not be useful, because all the different approaches have similar results. If I'm only putting in 300 requests a second on a modern system, I'm likely to see 300 requests a second. Nothing exciting. But the latency is revealing here. If one ESB responds in less time than another ESB, that's a very good sign, because with the same DS the average time per request is very telling. On the other hand, the saturation test is where the throughput is interesting.
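As a concrete illustration of the two metrics, here is a minimal saturation-style load driver - a sketch, assuming plain HTTP and a placeholder URL standing in for the ESB; a real benchmark would use a hardened, clustered tool on dedicated hardware. It reports throughput plus mean and 95th-percentile latency:

```python
import time
import statistics
import concurrent.futures
import urllib.request

ESB_URL = "http://esb.example.com:8280/service"  # placeholder endpoint
CLIENTS = 100                # saturation mode: every client hammers continuously
REQUESTS_PER_CLIENT = 50

def one_client(_):
    latencies = []
    for _ in range(REQUESTS_PER_CLIENT):
        start = time.perf_counter()
        urllib.request.urlopen(ESB_URL).read()   # fire request, read full response
        latencies.append(time.perf_counter() - start)
        # For the realistic-load model (#1), sleep between calls instead,
        # e.g. time.sleep(5), and focus on latency rather than throughput.
    return latencies

wall_start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=CLIENTS) as pool:
    all_latencies = [l for ls in pool.map(one_client, range(CLIENTS)) for l in ls]
wall = time.perf_counter() - wall_start

total_requests = CLIENTS * REQUESTS_PER_CLIENT
print(f"throughput:   {total_requests / wall:.0f} req/s")
print(f"mean latency: {statistics.mean(all_latencies) * 1000:.1f} ms")
print(f"p95 latency:  {statistics.quantiles(all_latencies, n=20)[-1] * 1000:.1f} ms")
```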
Before you look at the throughput, though, check three things:

1) Is the LD CPU running close to 100%?
2) Is the DS CPU running close to 100%?
3) Is the network bandwidth running close to 100%?

If any of these are true, you aren't doing a good test of the ESB throughput, because if you are looking at throughput then you want the ESB to be the bottleneck. If something else is the bottleneck then the ESB is not providing its maximum throughput, and you aren't giving it a fair chance. For this reason, most benchmarks use a very lightweight LD or a clustered LD, and similarly use a DS that is superfast and not a realistic DS. Sometimes the DS is coded to do some real work, or to sleep the thread while it's executing, to provide a more realistic load test. In that case you probably want to look at latency more than throughput.

Finally, you are looking to see a particular behaviour for throughput as you increase load.

[Chart: Throughput vs Load]

The shape of this graph shows an ideal scenario. As the LD puts more work through the ESB, the ESB responds linearly. At some point the CPU of the ESB hits maximum, and then the throughput stabilizes. What we don't want to see is the line drooping at the far right. That would mean that the ESB is crumpling under the extra load and failing to manage it effectively. This is like the office worker whose efficiency increases as you give them more work, but who eventually starts spending all their time re-organizing their todo lists, so that less work overall gets done.

Under the saturation test you really want to see the CPU of the ESB close to 100% utilised. Why? This is a sign that it's doing as much as possible. Why would it not be 100%? Three reasons - I/O, poor use of multiple cores, and thread locks: either the network card, disk or other I/O is holding it up, the code is not efficiently using the available cores, or there are thread contention issues.

Finally, it's worth noting that you expect the latency to increase a lot under the saturation test. A classic result is this: I do static routing for different message sizes with a 100-client LD. For message sizes up to 100k, maybe I see a constant 2ms overhead for using the ESB. Suddenly, as the message size grows from 100k to 200k, I see the overhead growing in proportion to the message size. Is this such a bad thing? No, in fact this is what you would expect. Below the 100k message size, the ESB is underloaded. The straight line up to this point is a great sign that the ESB is pipelining properly. Once the CPU becomes loaded, each request takes longer because it is being made to wait its turn at the ESB while the ESB deals with the increased load. A big hint here: when you look at this graph, the most interesting latency numbers occur before the CPU is fully loaded. The latency after the CPU is fully loaded is not that interesting, because it's simply a function of the number of queued requests.
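A small helper that encodes the checklist above - a sketch, where the utilisation fractions are figures you would sample yourself from monitoring during the run (the numbers in the example call are purely illustrative):

```python
def validate_saturation_run(ld_cpu, ds_cpu, net_util, esb_cpu, threshold=0.95):
    """Check that the ESB, not the harness, is the bottleneck in a saturation test.
    All arguments are utilisation fractions (0.0 - 1.0) sampled during the run."""
    problems = []
    if ld_cpu >= threshold:
        problems.append("LD CPU saturated: cluster the load driver or lighten it.")
    if ds_cpu >= threshold:
        problems.append("DS CPU saturated: the dummy service must be made faster.")
    if net_util >= threshold:
        problems.append("Network saturated: you are measuring the wire, not the ESB.")
    if not problems and esb_cpu < threshold:
        problems.append("Nothing saturated: suspect I/O waits, idle cores, "
                        "or lock contention inside the ESB.")
    return problems or ["Valid run: the ESB is the bottleneck."]

# Illustrative figures only -- in a real run these come from your monitoring.
for message in validate_saturation_run(ld_cpu=0.60, ds_cpu=0.40,
                                       net_util=0.30, esb_cpu=0.99):
    print(message)
```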
Now we understand the metrics, let's look at the actual scenarios.

A. Different Vendors, Same Workload

For the first comparison (different vendors), the first thing to be careful of is that the scenario is implemented in the best way possible in each ESB. There are usually a number of ways of implementing the same scenario. For example, the same ESB may offer two different HTTP transports (or more!) - blocking vs non-blocking, servlet vs library, etc. There may be an optimum approach, and it's worth reading the docs and talking to the vendor to understand the performance tradeoffs of each approach. Another thing to be careful of in this scenario is the tuning parameters. Each ESB has various tuning aspects that may affect the performance depending on the available hardware. For example, setting the number of threads and the memory based on the number of cores and physical memory may make a big difference. Once you have your results, assuming everything we've already looked at is tickety-boo, then both latency and throughput are interesting and valid comparisons here.

B. Different Workloads, Same Vendor

What this measures is what it costs you to do different activities with the same ESB. For example, static routing is likely to be faster than content-based routing, which in turn is faster than transformation. The data from this tells you the cost of doing different functions with the ESB. For example, you might want to do a security authentication/authorization check. You should see a constant bump in latency for the security check, irrespective of message size. But if you were doing complex transformation, you would expect to see higher latency for larger messages, because they take more time to transform.

C. Direct vs ESB

This is an interesting one. Usually this is done for a simple static routing/passthrough scenario. In other words, we are testing the ESB doing its minimum possible work. Why bother? Well, there are two different reasons. Firstly, ESB vendors usually do this for their own benefit as a baseline test: once you understand the passthrough performance, you can then see the cost of doing more work (e.g. logging a header, validating security, transforming the message). Remember the two testing methodologies (realistic load vs saturation)? You will see very different results in each for this, and the data may seem surprising.

For the realistic test, remember we want to look at latency. This is a good comparison for the ESB: how much extra time is spent going through the ESB per request under normal conditions. For example, if the average request to the backend takes 18ms and the average request via the ESB takes 19ms, we have an average ESB latency of 1ms. This is a good result - the client is not going to notice much difference - less than 5% extra.

The saturation test here is a good test to compare different ESBs. For example, suppose I can get 5000 reqs/sec direct. If via ESB_A the number is 3000 reqs/sec and via ESB_B the number is 2000 reqs/sec, I can say that ESB_A is providing better throughput than ESB_B. What is not a good metric here is comparing throughput in saturation mode for direct vs ESB. Why not? The reason is a little complex to explain.

Remember how we coded the DS to be as fast as possible, so as not to be a bottleneck? So what is the DS doing? It is really just reading bytes and sending bytes as fast as it can. Assuming the DS code is written efficiently using something really fast (e.g. just a servlet), what this is testing is how fast the hardware (CPU plus network card) can read and write through user space in the operating system. On modern server hardware you might get a very high number of transactions per second - maybe 5000 reqs/sec with each message in and out being 1k in size. So we have 1k in and 1k out = 2k of IO. 2k IO x 5000 reqs/sec x 8 bits gives us a total network bandwidth of 80Mbits/sec (excluding ethernet headers and overhead).
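The same arithmetic written out as a quick sanity check - a sketch using the illustrative figures from the paragraph above (with 1k taken as 1000 bytes). It also shows why the ESB must move twice the bytes, which the next paragraph walks through:

```python
MSG_BYTES = 1000        # 1k request in, 1k response out (as in the example above)
REQS_PER_SEC = 5000

# The DS reads each request and writes each response: 2k of IO per request.
ds_bits_per_sec = 2 * MSG_BYTES * REQS_PER_SEC * 8
print(f"DS bandwidth:  {ds_bits_per_sec / 1_000_000:.0f} Mbit/s")   # 80 Mbit/s

# The ESB sees every byte twice: LD->ESB->DS on the request path,
# DS->ESB->LD on the response path.
esb_bits_per_sec = 2 * ds_bits_per_sec
print(f"ESB bandwidth: {esb_bits_per_sec / 1_000_000:.0f} Mbit/s")  # 160 Mbit/s
```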
Now let's look at the ESB. Imagine it can handle 100% of the direct load, so there is no slowdown in throughput. For each request it has to read the message in from the LD and send it out to the DS. Even if it is doing this in pipelining mode, there is still a CPU cost and an IO cost. So the latency of the ESB may be 1ms, but the CPU and IO cost is much higher. For each response it also has to read the message in from the DS and write it out to the LD. So if the DS is doing 80Mbits/sec, the ESB must be doing 160Mbits/sec.

[Figure: the ESB carries both the LD-side and the DS-side traffic, so it handles twice the bytes.]

Now, if the LD is good enough, it will have loaded the DS to the max: CPU or IO capacity or both will be maxed out. Suppose the ESB is running on the same hardware platform as the DS. If the DS machine can do 80Mbit/s flat out, there is no way that the same hardware running as an ESB can do 160Mbit/s! In fact, if the ESB and DS code are both as efficient as possible, then the throughput via the ESB will always be 50% of the throughput direct to the DS. There is one possible way for the ESB to do better: it can be better coded than the DS. For example, if the ESB did transfers in kernel space instead of user space, that might make a difference.

The real answer here is to look at the latency. What is the overhead of adding the ESB to each request? If the ESB latency is small, then we can solve this problem by clustering the ESB: in this case we would put two ESBs in and get back to full throughput. The real point of this discussion is that this is not a useful comparison. In reality, backend target services are usually pretty slow. If the same dual-core server is actually doing some real work - e.g. database lookups, calculations, business logic - then it is much more likely to be doing 500 requests a second or even less.

The following chart shows real data to demonstrate this. The X-axis shows increasing complexity of work at the backend (DS). As the effort taken by the backend becomes more realistic, the loss in throughput from having an ESB in the way shrinks. So with a blindingly fast backend, we see the ESB struggling to provide just 55% of the throughput of the direct case. But as the backend becomes more realistic, we see much better numbers: at 2000 requests a second there is barely a difference (around a 10% reduction in throughput). In real life, what we actually see is that you often have many fewer ESBs than backend servers. For example, in the scenario of a backend server that can handle 500 reqs/sec, we might end up with a cluster of two ESBs handling a cluster of 8 backends.

Conclusion

I hope this blog has given a good overview of ESB performance and benchmarking - in particular, when it is a good idea to look at latency and when to use throughput.

Full Article
A rose by any other name would smell as sweet, but with no name, maybe not
By pzf.fremantle.org — Published On :: Thu, 27 Mar 2014 14:17:00 +0000

The famous quotation from Shakespeare is that "a rose by any other name would smell as sweet". But what if the rose had no name? What if every time you talked about it, you had to come up with a description: you know, that thing with the pretty pink petals, except sometimes they are red, and sometimes white, but it smells really nice, except some don't really smell and others do; you know, the thing with multiple layers of petals, except for the wild ones that only have one layer. Maybe not so sweet.

What about the other way round? You build a really cool system that works effectively, and then it turns out that someone has named it. Now that is nice, and yes, your thing suddenly smells sweeter. I've had this happen a lot. When we first started WSO2 we applied a lot of cool approaches that we learnt from Apache. But they weren't about Open Source; they were about Open Source Development. And when they got names it became easier to explain. One aspect of that is Agile: we all know what Agile means and why it's good. Another aspect is Meritocracy. So now I talk about a meritocratic, agile development team and people get me. It helps them to understand why WSO2 is a good thing.

When Sanjiva and I started WSO2 we wanted to get rid of EJBs: we wanted to remove the onion layers of technology that had built up in middleware and create a simpler, smaller, more effective stack. It turns out we created lean software, and that is what we call it today. We also create orthogonal (or maybe even orthonormal) software. That term isn't so well understood, but if you are a mathematician you will get what we mean.

Why am I suddenly talking about this? Because today, Srinath posted a note letting me know that something else we have been doing for a while has a nice name. It turns out that the architecture we promote for Big Data analysis - you know, the one where we pipe the data through an event bus, into both real-time complex event processing and also into Cassandra, where we apply Hive running on Hadoop to crunch it up and batch-analyse it, and then store it either in a traditional SQL database for reports to be generated, or occasionally in different Cassandra NoSQL tables - you know that architecture? Aha! It's the Lambda Architecture. And yes, it's so much easier to explain now it's got a nice name. Read more here: http://srinathsview.blogspot.co.uk/2014/03/implementing-bigdata-lambda.html

Full Article
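For readers who want the shape of that architecture in miniature, here is a sketch of the dual-path idea behind the Lambda Architecture: every event is fanned out to a speed layer for immediate answers and appended to an immutable master log that a batch layer periodically recomputes from scratch. The in-memory structures below are generic stand-ins for the components named in the post (CEP engine, Cassandra, Hive/Hadoop, SQL store), not the WSO2 stack itself:

```python
import time

master_log = []      # batch-layer input: immutable, append-only (Cassandra in the post)
realtime_view = {}   # speed layer: incrementally updated (the CEP engine in the post)

def handle_event(event):
    """Fan each incoming event out to both layers (the event bus's job)."""
    master_log.append(event)                              # keep everything, forever
    user = event["user"]
    realtime_view[user] = realtime_view.get(user, 0) + 1  # cheap incremental update

def batch_recompute():
    """Batch layer: periodically recompute the view from scratch
    (Hive on Hadoop in the post), then publish to a serving store (SQL database)."""
    view = {}
    for event in master_log:
        view[event["user"]] = view.get(event["user"], 0) + 1
    return view

for user in ["alice", "bob", "alice"]:
    handle_event({"user": user, "ts": time.time()})

print(realtime_view)       # {'alice': 2, 'bob': 1} -- available immediately
print(batch_recompute())   # same answer, recomputed from the full log
```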
Normas para la escritura científica (Rules for Scientific Writing)
By novicetranslators.blogspot.com — Published On :: Wed, 15 Feb 2012 13:17:00 +0000

Science in Spanish is not written the way it is in English. Scientific language stands apart because it deals with a specific field of knowledge and is usually addressed to specialists or professionals in that field, so it uses specific terminology. Beyond the fact that the words used may sound strange to a lay speaker, their meaning is precise and objective, and may differ from their meaning in everyday language. For years now, English has established itself as the language of communication in the experimental sciences. As a consequence, any scientific researcher who wants to stay up to date and disseminate their knowledge is obliged to read, write and publish mainly in that language, so the habit of communicating in English ends up corrupting their mother tongue with unnecessary anglicisms and English-derived usages, which are then passed on to the rest of the similarly educated population they interact with - teachers, journalists, scientists - and so these usages easily enter circulation. The carelessness with which scientific writing in Spanish is treated is therefore enormous, even though there are also bodies and collections of rules that offer guidance on the correct use of Spanish when translating or creating a scientific document. The pages of this small interactive manual aim to offer both teachers and students a conceptual and descriptive framework for scientific language, in keeping with the parameters of science. Nothing would please us more than to serve as a reference for the study, production and correction of any type of scientific text. Author: Gonzalo Claros. [At the bottom there are links to more resources, although not all of them work, plus the PDF file for printing.]

Full Article — ciencia, español, estilo, guia, ingles, lenguaje cientifico, manual, recursos
Wow! A search engine for scientific literature - an endless source of information (¡Guau! Buscador de bibliografía científica. Fuente infinita de información)
By novicetranslators.blogspot.com — Published On :: Tue, 28 Feb 2012 22:28:00 +0000

SciVerse

Full Article — bibliografía científica, recursos
How to Make More $$ without Really Trying
By novicetranslators.blogspot.com — Published On :: Wed, 04 Jul 2012 14:14:00 +0000

Have you noticed how different businesses are constantly inventing new ways to nickel-and-dime their customers? Airlines first stopped offering free in-flight meals; then they introduced a fee for checked-in luggage; they reduced the leg space of their seats and are now charging for "special" (exit row and bulkhead) seats that leave you less cramped at the end of a two-hour flight. At the same time, they tacked a host of fees onto their ticket prices, so that now the amount of these fees (and government taxes) exceeds the nominal price of the ticket itself. Other businesses are not bashful either in inventing sneaky ways to separate you from your money. A carton of orange juice, which used to contain half a gallon (64 oz) of juice, now contains only 59 oz. A can of coffee, which at one time contained a pound (16 oz), now contains 12, 11, or 10.5 oz. And don't expect the price to go down for these shrinking quantities of product. Gas stations are now adding a separate fee, on top of their already sky-high prices, for credit card payments. Some physicians are charging thousands of dollars in yearly fees just to keep you as a patient (no specific service included). The list could go on and on.

Many of these and other businesses count on their customers being stupid, or at least not paying attention to what they are being charged for and how much they are getting. Of course, they're also trying to compensate for their own rising costs (in part due to similar tactics by other businesses) and shrinking revenues due to the recession. So, why don't we, translators, get imaginative and enhance our incomes by adding a few items to our rates? I envision my future bill to my clients looking something like this:

Translation          50.00
Availability fee      2.50
Keyboarding fee       3.00
Administrative fee    2.00
Billing fee           1.50
Software usage fee    1.75
Hardware usage fee    1.80
e-mailing fee         1.65
Alertness fee*        1.60
Innovation fee**      2.50
Bundling fee***       2.00
Total payable        70.30

* That's for the espresso to keep me awake while I'm translating.
** That's for inventing all these possible and impossible fees.
*** Let them figure out what this means (you can use any random word from the dictionary here).

Feel free to add your own bright ideas to this list. Thanks to witty Gabe Bokor from the Translation Journal Blog.

Full Article — fees, rates, translation fees, translation journal blog
Training for legal translators. Part IV. Make at least one big study commitment.
By legalspaintrans.com — Published On :: Mon, 04 Feb 2019 08:55:05 +0000

This is the last part of a series on training for legal translators. See the first post here. To put yourself on the path to becoming a good legal translator, you need to make one big study commitment. A big study commitment is anything that takes at least a year, challenges you, and costs a lot of […]

Full Article — CPD, Legal translation, MA Legal Translation, legal translation
Dilemmas of style when translating legislation
By legalspaintrans.com — Published On :: Tue, 30 Jun 2020 10:39:02 +0000

Many dilemmas of style arise when translating legislation into English. This post looks at the most common ones and solutions to them. By legislation I mean any laws or rules set down by a governing body, be that of a country, company or university. So everything from a criminal code to a health and safety […]

Full Article — Legal translation, legislation translation
Spain's main registros: translations and background info
By legalspaintrans.com — Published On :: Wed, 13 Sep 2023 11:57:22 +0000

This post looks at the main public registers (registros) in Spain. It outlines what they are for and lists common translations and similar entities in English-speaking countries. Whether each registro is more a list or a place (or both) is also covered. This key distinction (described in this post) can affect the translation or at […]

Full Article — Legal Spain, Legal Spain Term, Legal translation, Spanish legal system
Marc Fesneau: "Everyone must stop keeping their own 2027 agenda in their sights"
By www.lepoint.fr — Published On :: Wed, 13 Nov 2024 06:25:00 +0100

INTERVIEW. For the head of the MoDem group in the National Assembly, there is an urgent need to move past the battle of egos undermining Barnier's "common base" and to "be useful to the French".

Full Article
Radical left: get your pogrom here!
By www.lepoint.fr — Published On :: Wed, 13 Nov 2024 07:00:00 +0100

COLUMN. Accusations of "genocide" against Israel are multiplying, revealing the anti-Zionist drift of part of the radical left.

Full Article
Omar Youssef Souleimane: "What I heard in banlieue classrooms"
By www.lepoint.fr — Published On :: Wed, 13 Nov 2024 09:00:00 +0100

The Syrian-born writer led workshops in secondary schools in the Paris region organised to combat radicalisation and promote secularism. He gives his account.

Full Article
Marine Le Pen in the trap of "mandatory ineligibility"
By www.lepoint.fr — Published On :: Wed, 13 Nov 2024 08:00:00 +0100

As prosecutors are due to make their sentencing requests at her trial this Wednesday, 13 November, the president of the RN group in the National Assembly fears, above all, being barred from standing in 2027.

Full Article
Jean-Marie Le Pen hospitalised following a state of general weakness
By www.lepoint.fr — Published On :: Wed, 13 Nov 2024 14:47:00 +0100

The founder of the Front National, aged 96 and now retired from political life, has been hospitalised to undergo several tests, as a "safety precaution".

Full Article
María Moliner: "Diccionario de uso del español"
By blog.lengua-e.com — Published On :: Mon, 22 Apr 2024 15:02:23 +0000

María Moliner's Diccionario de uso del español is a monumental work: two volumes that the author patiently compiled in the living room […] Origen

Full Article — diccionarios, Diccionario de uso del español, escribir, María Moliner
Integrating Personal Web Data through Semantically Enhanced Web Portal
By www.jucs.org — Published On :: 2011-03-18T16:21:08+01:00

Currently, the World Wide Web is mostly composed of isolated and loosely connected "data islands". Connecting them together and retrieving only the information that is of interest to the user is the common Web usage process. Creating infrastructure that would support automation of that process, by aggregating and integrating Web data in accordance with the user's personal preferences, would greatly improve today's Web usage. A significant part of Web data is available only through login- and password-protected applications. As that data is very important for the usefulness of the described process, the proposed infrastructure needs to support authorized access to the user's personal data. In this paper we propose a semantically enhanced Web portal that presents a unique personalized entry point to domain-specific Web information. We also propose an identity management system that supports authorized access to protected Web data. To verify the proposed solution, we have built Sweb - a semantically enhanced Web portal that uses the proposed identity management system.

Full Article
Information Consolidation in Large Bodies of Information
By www.jucs.org — Published On :: 2011-03-18T16:21:18+01:00

Due to information technologies, the problem we are facing today is not a lack of information but too much information. This phenomenon becomes very clear when we consider two figures that are often quoted: knowledge is doubling in many fields (biology, medicine, computer science, ...) within some 6 years, yet information is doubling every 8 months! This implies that the same piece of information/knowledge is published a large number of times with small variations. Just look at an arbitrary news item: if considered of some general interest, reports of it will appear in all major newspapers, journals, electronic media, etc. This is also the problem with information portals that tie together a number of large databases. It is our contention that we need methods to reduce the huge set of information concerning a particular topic to a number of pieces of information (let us call each such piece an "essay" in what follows) that present a good cross-section of potential points of view. We will explain why one essay is usually not enough, yet the problem of reducing a huge amount of contributions to a digestible number of essays is formidable - indeed, it is science fiction at the moment. We will argue in this paper that it is one of the important tasks of computer science to start tackling this problem, and we will show that in some special cases partial solutions are possible.

Full Article
Document Retrieval Using SIFT Image Features
By www.jucs.org — Published On :: 2011-04-07T14:38:22+02:00

This paper describes a new approach to document classification based on visual features alone. Text-based retrieval systems perform poorly on noisy text. We have conducted a series of experiments using cosine distance as our similarity measure, selecting varying numbers of local interest points per page, and varying numbers of nearest-neighbour points in the similarity calculations. We have found that a distance-based measure of similarity outperforms a rank-based measure except when there are few interest points. We show that using visual features substantially outperforms text-based approaches for noisy text, giving average precision in the range 0.4-0.43 in several experiments retrieving scientific papers.

Full Article
The Use of Latent Semantic Indexing to Mitigate OCR Effects of Related Document Images
By www.jucs.org — Published On :: 2011-04-07T14:38:42+02:00

Due to both the widespread, multipurpose use of document images and the current availability of a large number of document image repositories, robust information retrieval mechanisms and systems have been increasingly demanded. This paper presents an approach to support the automatic generation of relationships among document images by exploiting Latent Semantic Indexing (LSI) and Optical Character Recognition (OCR). We developed the LinkDI (Linking of Document Images) service, which extracts and indexes document image content, computes its latent semantics, and defines relationships among images as hyperlinks. LinkDI was evaluated on document image repositories, and its performance was assessed by comparing the quality of the relationships created among textual documents as well as among their respective document images. Considering those same document images, we ran further experiments to compare the performance of LinkDI with and without the LSI technique. Experimental results showed that LSI can mitigate the effects of the usual OCR misrecognition, which reinforces the feasibility of LinkDI relating OCR output even with high degradation.

Full Article
Color Image Restoration Using Neural Network Model
By www.jucs.org — Published On :: 2011-04-07T14:38:54+02:00

A neural network learning approach for color image restoration is discussed in this paper, and one possible solution for restoring images is presented. Here, the neural network weights are treated as regularization parameter values instead of being specified explicitly. The weights are modified during training through the supply of training-set data. The desired response of the network is an estimated value of the current pixel. This estimated value is used to modify the network weights such that the restored value produced by the network for a pixel is as close as possible to this desired response. One of the advantages of the proposed approach is that, once the neural network is trained, images can be restored without any prior information about the model of the noise/blurring with which the image is corrupted.

Full Article
Developing a Mobile Collaborative Tool for Business Continuity Management
By www.jucs.org — Published On :: 2011-07-08T12:29:58+02:00

We describe the design of a mobile collaborative tool that helps teams manage critical computing infrastructures in organizations, a task usually designated Business Continuity Management. The design process started with a requirements-definition phase based on interviews with professional teams. The elicited requirements highlight four main concerns: collaboration support, knowledge management, team performance, and situation awareness. Based on these concerns, we developed a data model and tool supporting the collaborative update of Situation Matrixes. The matrixes aim to provide an integrated view of the operational and contextual conditions that frame critical events and inform the operators' responses to events. The paper provides results from our preliminary experiments with Situation Matrixes.

Full Article
An Empirical Study on Human and Information Technology Aspects in Collaborative Enterprise Networks
By www.jucs.org — Published On :: 2011-07-08T12:30:00+02:00

Small and Medium Enterprises (SMEs) face new challenges in the global market, as customers require more complete and flexible solutions and continue to drastically reduce the number of their suppliers. SMEs are trying to address these challenges through cooperation within collaborative enterprise networks (CENs). Human aspects constitute a fundamental issue in these networks, as it is people, rather than organizations or Information Technology (IT) systems, who cooperate. Since there is a lack of empirical studies on the role of human factors in IT-supported collaborative enterprise networks, this paper addresses the major human aspects encountered in this type of organization. These human aspects include trust issues, knowledge and know-how sharing, coordination and planning activities, and communication and mutual understanding, as well as their influence on the business processes of CENs supported by IT tools. This paper empirically shows that these aspects constitute key factors in the success or failure of CENs. Two case studies performed on two different CENs in Switzerland are presented, and the roles of human factors are identified with respect to the IT support systems. Results show that specific human factors, namely trust and communication and mutual understanding, have to be addressed well in order to design and develop adequate software solutions for CENs.

Full Article
Managing Mechanisms for Collaborative New-Product Development in the Ceramic Tile Design Chain
By www.jucs.org — Published On :: 2011-07-08T12:30:02+02:00

This paper focuses on improving the management of New-Product Development (NPD) processes within the particular context of a cluster of enterprises that cooperate through a network of intra- and inter-firm relations. Ceramic tile design chains have certain singularities that condition the NPD process, such as the lack of a strong hierarchy, fashion pressure, or the existence of different origins for NPD projects. We have studied these particular circumstances in order to tailor Product Life-cycle Management (PLM) tools and other management mechanisms to fit suitable sectoral reference models. Special emphasis is placed on PLM templates for structuring and standardizing projects, and also on the roles involved in the process.

Full Article
Coordinated System for Real Time Muscle Deformation during Locomotion
By www.jucs.org — Published On :: 2011-04-24T11:14:51+02:00

This paper presents a system that simulates, in real time, the volumetric deformation of muscles during human locomotion. We propose a two-layered motion model. The requirements of realism and real-time computation lead to a hybrid locomotion system that uses a skeleton as the first layer. The muscles, represented by an anatomical surface model, constitute the second layer, whose deformations are simulated with a finite element method (FEM). The FEM subsystem is fed by the torques and forces obtained from the locomotion system through a line-of-action model, and takes into account the geometry and material properties of the muscles. High-level parameters (like height, weight, physical constitution, step frequency, step length or speed) make it possible to customize the individuals and their locomotion, and therefore the deformation of each person's muscles.

Full Article
Pragmatic Knowledge Services
By www.jucs.org — Published On :: 2011-04-24T11:15:39+02:00

Knowledge, innovations and their implementation in effective practices are essential for development in all fields of societal action, e.g. policy, business, health, education, and everyday life. However, managing the interrelations between knowledge, innovation and practice is complicated, and facilitation by suitable knowledge services is needed. This paper explores the theory of converging knowledge, innovation, and practice, discusses some advances in information systems development, and identifies general requirements for pragmatic knowledge services. A trialogical approach to knowledge creation and learning is adopted as a viable theoretical basis. Three examples of novel knowledge services - Opasnet, Innovillage, and the Knowledge Practices Environment (KPE) - are also presented. It is concluded that pragmatic knowledge services, as hybrid systems of information technology and its users, are not only means for the creation of practical knowledge but also vehicles of a cultural change from individualistic perceptions of knowledge work towards mediated collaboration.

Full Article
Markup upon Video - towards Dynamic and Interactive Video Annotations
By www.jucs.org — Published On :: 2011-07-08T12:31:47+02:00

Interactive video is becoming a more and more dominant feature of our media platforms. Thanks especially to the popular YouTube annotations framework, integrating graphical annotations into a video has become very fashionable these days. However, the current options are limited to a few graphical shapes for which the user can define virtually no dynamic behaviour. Despite the enormous demand for easily created interactive video, there are no such advanced tools available. In this article we describe an innovative approach to realizing dynamic and interactive video annotations. First we explain basic concepts of video markup, like the generic element model and visual descriptors. After that we introduce the event-tree model, which can be used to define the event handling of an interactive video both formally and visually. By combining these basic concepts, we can give the video community an effective tool for realizing interactive and dynamic video in a simple, intuitive and focused way.

Full Article
Automatically Checking Feature Model Refactorings
By www.jucs.org — Published On :: 2011-05-06T16:03:26+02:00

A feature model (FM) defines the valid combinations of features, where each combination corresponds to a program in a Software Product Line (SPL). FMs may evolve, for instance, during refactoring activities. Developers may use a catalog of refactorings as support; however, a catalog is incomplete in principle. Additionally, it is non-trivial to propose correct refactorings. To our knowledge, no previous analysis technique for FMs has been used for checking properties of general FM refactorings (transformations that can be applied to a number of FMs) containing a representative number of features. We propose an efficient encoding of FMs in the Alloy formal specification language. Based on this encoding, we show how the Alloy Analyzer tool, which performs analysis on Alloy models, can be used to automatically check whether encoded general and specific FM refactorings are correct. Our approach can analyze general transformations automatically, to a significant scale, in a few seconds. To evaluate the analysis performance of our encoding, we ran it on automatically generated FMs ranging from 500 to 2,000 features. Furthermore, we analyze the soundness of general transformations.

Full Article
An Aspect-Oriented Framework for Weaving Domain-Specific Concerns into Component-Based Systems
By www.jucs.org — Published On :: 2011-05-06T16:03:36+02:00

Software components are used in various application domains, and many component models and frameworks have been proposed to fulfill domain-specific requirements. The general trend followed by these approaches is to provide ad hoc models and tools for capturing these requirements and for implementing their support within dedicated runtime platforms limited to the features of the targeted domain. The challenge, then, is to propose more flexible solutions in which component reuse is domain-agnostic. In this article, we present a framework supporting the compositional construction and development of applications that must meet various extra-functional/domain-specific requirements. The key points of our contribution are: i) we target the development of component-oriented applications where extra-functional requirements are expressed as annotations on the units of composition in the application architecture; ii) these annotations are implemented as open and extensible component-based containers, achieving full separation of functional and extra-functional concerns; iii) finally, the full machinery is implemented using the Aspect-Oriented Programming paradigm. We validate our approach with two case studies: the first is related to real-time and embedded applications, while the […]

Full Article
Context-Aware Composition and Adaptation based on Model Transformation
By www.jucs.org — Published On :: 2011-05-06T16:03:43+02:00

Using pre-existing software components (COTS) to develop software systems requires the composition and adaptation of the component interfaces to solve mismatch problems. These mismatches may appear at different interoperability levels (signature, behavioural, quality of service and semantic). In this article, we define an approach which supports composition and adaptation of software components based on model transformation, taking all four levels into account. The signature and behavioural levels are addressed by means of transition systems. Context-awareness and semantic-based techniques are used to tackle quality of service and semantics, respectively, but both also consider the signature level. We have implemented and validated our proposal for the design and application of realistic and complex systems. Here, we illustrate the need to support the variability of the adaptation process in a context-aware pervasive system through a real-world case study, where software components are implemented using Windows Workflow Foundation (WF). We apply our model transformation process to extract transition systems (CA-STS specifications) from WF components. These CA-STSs are used to tackle the composition and adaptation. Then, we generate a CA-STS adaptor specification, which is transformed into its corresponding WF adaptor component with the purpose of interacting with all the WF components of the system, thereby avoiding mismatch problems.

Full Article
A Framework to Evaluate Interface Suitability for a Given Scenario of Textual Information Retrieval
By www.jucs.org — Published On :: 2011-07-04T16:04:41+02:00

Visualization of search results is an essential step in the textual Information Retrieval (IR) process. Indeed, Information Retrieval Interfaces (IRIs) are used as a link between users and IR systems, a simple example being the ranked list offered by common search engines. Given the importance of visualizing search results, many interfaces (textual, 2D or 3D IRIs) have been proposed in the last decade. Two kinds of evaluation methods have been developed: (1) evaluation methods for these interfaces, aimed at validating ergonomic and cognitive aspects; and (2) evaluation methods applied to information retrieval systems (IRSs), aimed at measuring their effectiveness. However, as far as we know, these two kinds of evaluation methods are disjoint. Indeed, considering a given IRI associated with a given IRS, what happens if we associate this IRI with another IRS that does not have the same effectiveness? In this context, we propose an IRI evaluation framework aimed at evaluating the suitability of any IRI for different IR scenarios. First of all, we define the notion of an IR scenario as a combination of features related to users, IR tasks and IR systems. We have implemented the framework through a specific evaluation platform that enables IRI evaluations and helps end-users (e.g. IRS developers or IRI designers) choose the most suitable IRI for a specific IR scenario.

Full Article
Improving Security Levels of IEEE802.16e Authentication by Involving Diffie-Hellman PKDS
By www.jucs.org — Published On :: 2011-07-04T16:04:45+02:00

Recently, IEEE 802.16 Worldwide Interoperability for Microwave Access (WiMAX for short) has provided us with low-cost, high-efficiency and high-bandwidth network services. However, as with WiFi, radio wave transmission also exposes WiMAX to the wireless transmission security problem. To address this problem, the IEEE 802.16 standard defines the Privacy Key Management (PKM for short) authentication process, which offers one-way authentication. However, with one-way authentication, an SS may connect to a fake BS. Mutual authentication, like that developed for PKMv2, can avoid this problem. Therefore, in this paper, we propose an authentication key management approach, called the Diffie-Hellman-PKDS-based authentication method (DiHam for short), which employs a secret-door asymmetric one-way function, the Public Key Distribution System (PKDS for short), to improve the current security level of authentication between a WiMAX BS and SS. We further integrate PKMv1 and DiHam into a single system, called PKM-DiHam (P-DiHam for short), in which PKMv1 acts as the authentication process and DiHam is responsible for key management and delivery. By transmitting securely protected and well-defined parameters between the SS and BS, the two stations can mutually authenticate each other. Messages, including those conveying user data and authentication parameters, can then be delivered more securely.

Full Article
Hierarchical Graph-Grammar Model for Secure and Efficient Handwritten Signatures Classification
By www.jucs.org — Published On :: 2011-07-04T16:04:47+02:00

One important subject associated with personal authentication capabilities is the analysis of handwritten signatures. Among the many known techniques, algorithms based on linguistic formalisms are also possible. However, such techniques require a number of algorithms for intelligent image analysis to be applied, allowing the development of new solutions in the field of personal authentication and the building of modern security systems based on the advanced recognition of such patterns. This article presents an approach based on the use of syntactic methods for the static analysis of handwritten signatures. The graph linguistic formalisms applied, such as the IE graph and the ETPL(k) grammar, are characterised by considerable descriptive strength and a polynomial-time membership problem for syntactic analysis. To represent the analysed handwritten signatures, new hierarchical (two-layer) HIE graph structures based on IE graphs have been defined. The two-layer graph description makes it possible to take into consideration both local and global features of the signature. The use of attributed graphs enables the storage of additional semantic information describing the properties of individual signature strokes. The verification and recognition of a signature consists in checking whether its graph description belongs to the language describing the specimen database. Initial assessments show the method's precision at an average level of just under 75%.

Full Article
Semantic Web: Theory and Applications
By www.jucs.org — Published On :: 2011-07-20T10:20:28+02:00

Full Article
Towards Classification of Web Ontologies for the Emerging Semantic Web
By www.jucs.org — Published On :: 2011-07-20T10:20:32+02:00

The massive growth in ontology development has opened new research challenges, such as ontology management, search and retrieval, for the entire semantic web community. This has resulted in many recent developments, like OntoKhoj, Swoogle and OntoSearch2, that facilitate the tasks users have to perform. These semantic web portals mainly treat ontologies as plain text and use traditional text classification algorithms for classifying ontologies in directories and assigning predefined labels, rather than using the semantic knowledge hidden within the ontologies. Such approaches suffer from many types of classification problems and a lack of accuracy, especially in the case of overlapping ontologies that share common vocabularies. In this paper, we define the ontology classification problem and categorize it into several sub-problems. We present a new ontological methodology for the classification of web ontologies, guided by the requirements of emerging Semantic Web applications and by the lessons learnt from previous systems. The proposed framework, OntClassifire, is tested on 34 ontologies with a certain degree of domain overlap, and the effectiveness of the ontological mechanism is verified. It benefits the construction, maintenance and expansion of ontology directories on the semantic web, helping to focus crawling and to improve the quality of search for software agents and people. We conclude that the use of context-specific knowledge hidden in the structure of ontologies gives more accurate results for ontology classification.

Full Article
ma A Semantic Wiki Based on Spatial Hypertext By www.jucs.org Published On :: 2011-07-20T10:20:34+02:00 Spatial Hypertext Wiki (ShyWiki) is a wiki that represents knowledge using notes that are spatially distributed across wiki pages and have visual characteristics such as colour, size, or font type. The use of spatial and visual characteristics in wikis is important for improving how humans comprehend, create, and organise knowledge. Another important capability in wikis is allowing machines to process knowledge; wikis that formally structure knowledge for this purpose are called semantic wikis. This paper describes how ShyWiki can make use of spatial hypertext in order to be a semantic wiki. ShyWiki can represent knowledge at different levels of formality. Its users can annotate content and represent semantic relations without being experts in Semantic Web data description languages. The spatial hypertext features make it suitable for representing unstructured knowledge and implicit graphical relations among concepts. In addition, Semantic Web and spatial hypertext features are combined to represent structured knowledge. ShyWiki's Semantic Web features improve navigation and publish the wiki's knowledge as RDF resources, including the implicit relations extracted by a spatial parser. Full Article
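The sketch below illustrates the general idea of that last step: spatially arranged notes are run through a trivial "spatial parser" (proximity only, an assumption standing in for ShyWiki's richer one) and the result is emitted as RDF triples; the example.org vocabulary is hypothetical.

```python
# A toy sketch of exporting spatially arranged notes as RDF triples.
# Assumptions: the example.org vocabulary and the proximity-only rule
# below stand in for ShyWiki's real vocabulary and spatial parser.
from dataclasses import dataclass

EX = "http://example.org/shywiki#"  # hypothetical namespace

@dataclass
class Note:
    id: str
    text: str
    x: int
    y: int
    colour: str

def implicit_relations(notes, threshold=50):
    """Trivial 'spatial parser': notes closer than `threshold` are related."""
    pairs = []
    for i, a in enumerate(notes):
        for b in notes[i + 1:]:
            if abs(a.x - b.x) + abs(a.y - b.y) < threshold:
                pairs.append((a, b))
    return pairs

def to_ntriples(notes):
    lines = []
    for n in notes:
        lines.append(f'<{EX}{n.id}> <{EX}text> "{n.text}" .')
        lines.append(f'<{EX}{n.id}> <{EX}colour> "{n.colour}" .')
    for a, b in implicit_relations(notes):
        lines.append(f'<{EX}{a.id}> <{EX}near> <{EX}{b.id}> .')
    return "\n".join(lines)

notes = [Note("n1", "Semantic wikis", 10, 10, "yellow"),
         Note("n2", "Spatial hypertext", 30, 25, "blue")]
print(to_ntriples(notes))
```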
ma A Ranking Tool Exploiting Semantic Descriptions for the Comparison of EQF-based Qualifications By www.jucs.org Published On :: 2011-07-20T10:20:35+02:00 One of the main issues currently discussed at the Community level is the mobility of students and workers across Europe. In recent years, several initiatives have been carried out to address this situation; one of them is the definition of the European Qualification Framework (EQF), a common architecture for describing qualifications. At the same time, several research activities have investigated how semantic technologies could be exploited to compare qualifications in the field of human resources acquisition. In this paper, the EQF specifications are applied in a practical scenario to develop a ranking algorithm for the comparison of qualifications expressed in terms of knowledge, skill, and competence concepts, potentially supporting European employers during the recruiting phase. Full Article
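A minimal form of such a ranking can be sketched as weighted overlap between the concept sets of a job profile and each qualification; the weights and the set-based similarity below are assumptions, since the paper's actual semantic similarity measure is not reproduced here.

```python
# A sketch of ranking EQF-style qualifications against a job profile.
# Assumptions: plain set overlap with per-dimension weights stands in
# for the paper's semantic similarity measure.
WEIGHTS = {"knowledge": 1.0, "skill": 1.5, "competence": 2.0}

def score(qualification, profile):
    total = 0.0
    for dim, w in WEIGHTS.items():
        wanted = profile.get(dim, set())
        held = qualification.get(dim, set())
        if wanted:
            total += w * len(wanted & held) / len(wanted)  # coverage per dimension
    return total

profile = {"knowledge": {"sql", "statistics"},
           "skill": {"data-modelling"},
           "competence": {"project-management"}}
candidates = {
    "BSc Data Science": {"knowledge": {"sql", "statistics", "python"},
                         "skill": {"data-modelling"},
                         "competence": set()},
    "MBA": {"knowledge": set(),
            "skill": set(),
            "competence": {"project-management"}},
}
for name in sorted(candidates, key=lambda q: score(candidates[q], profile), reverse=True):
    print(name, round(score(candidates[name], profile), 2))
```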
ma Ontology-based Competency Management: the Case Study of the Mihajlo Pupin Institute By www.jucs.org Published On :: 2011-07-20T10:20:38+02:00 Semantic technologies have steadily gained relevance in recent years in both the research and business worlds. Against this background, the present article discusses the design and implementation of a competency management system in the information and communication technologies domain, using the latest Semantic Web tools and technologies, including the D2RQ server, TopBraid Composer, OWL 2, SPARQL, SPARQL Rules, and common public vocabularies related to human resources. In particular, the paper discusses the process of building individual and enterprise competence models in the form of an ontology database, as well as different ways of meaningfully searching and retrieving expertise data on the Semantic Web. The ontological knowledge base stores competences extracted and integrated from structured as well as unstructured sources. Using the illustrative case study of deploying such a system in the Human Resources sector at the Mihajlo Pupin Institute, this paper shows an example of new approaches to data integration and information management. The proposed approach extends the functionality of existing enterprise information systems and opens possibilities for developing future Internet services. It allows organizations to express their core competences and talents in a standardized, machine-processable and understandable format, and hence facilitates their integration in the European Research Area and beyond. Full Article
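The expertise-search side of such a system boils down to SPARQL queries over the competence ontology. Below is a hedged sketch using rdflib as a stand-in triple store; the ex: vocabulary (hasCompetence, level) is an illustrative assumption, not the Institute's actual schema.

```python
# A hedged sketch of expertise search over a competence ontology, with
# rdflib as a stand-in triple store. Assumptions: the ex: vocabulary
# (hasCompetence, level) is illustrative, not the Institute's schema.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/competence#")
g = Graph()
g.add((EX.alice, EX.hasCompetence, EX.sparql))
g.add((EX.alice, EX.level, Literal(4)))
g.add((EX.bob, EX.hasCompetence, EX.owl))
g.add((EX.bob, EX.level, Literal(2)))

# Find people holding a given competence above a threshold level.
q = """
PREFIX ex: <http://example.org/competence#>
SELECT ?person ?lvl WHERE {
    ?person ex:hasCompetence ex:sparql ;
            ex:level ?lvl .
    FILTER (?lvl >= 3)
}
"""
for person, lvl in g.query(q):
    print(person, lvl)  # only alice qualifies
```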
ma An Ontology based Agent Generation for Information Retrieval on Cloud Environment By www.jucs.org Published On :: 2011-07-20T10:35:16+02:00 Retrieving information or discovering knowledge from a well-organized data center generally requires familiarity with its schema, structure, and architecture, which runs against the inherent concept and characteristics of a cloud environment. An effective approach to retrieving desired information or extracting useful knowledge is therefore an important issue in the emerging information/knowledge cloud. In this paper, we propose an ontology-based agent generation framework for information retrieval on a cloud environment in a flexible, transparent, and easy way. When a user submits a flat-text request for retrieving information on a cloud environment, the request is automatically deduced by a Reasoning Agent (RA) based on a predefined ontology and reasoning rules, and then translated into a Mobile Information Retrieving Agent Description File (MIRADF), formatted in a proposed Mobile Agent Description Language (MADF). A generating agent, named MIRA-GA, is also implemented to generate a MIRA according to the MIRADF. We also design and implement a prototype that integrates these agents and show an interesting example to demonstrate the feasibility of the architecture. Full Article
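The pipeline, flat text in, agent description out, can be sketched as follows; the keyword-to-concept "ontology", the JSON stand-in for the agent description file, and the itinerary field are all illustrative assumptions, not the paper's MADF syntax.

```python
# A toy sketch of the request-to-agent pipeline. Assumptions: the term
# mapping, the JSON structure, and the hop list are hypothetical; the
# real MIRADF/MADF formats are not reproduced here.
import json

# Predefined "ontology": surface terms -> concepts (illustrative only).
ONTOLOGY = {
    "paper": "Document", "article": "Document",
    "weather": "SensorData", "temperature": "SensorData",
}

def reason(request: str):
    """Reasoning Agent stand-in: deduce the concepts a request refers to."""
    words = request.lower().split()
    return sorted({ONTOLOGY[w] for w in words if w in ONTOLOGY})

def generate_description(request: str) -> str:
    """Generate an agent description file for the deduced concepts."""
    return json.dumps({
        "agent": "MIRA",
        "query": request,
        "targets": reason(request),
        "hops": ["node-a", "node-b"],  # hypothetical itinerary
    }, indent=2)

print(generate_description("find every article about temperature trends"))
```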
ma Legislative elections: Jospin's gift to Emmanuel Macron... By marc-vasseur.over-blog.com Published On :: Tue, 06 Jun 2017 20:37:05 +0200 We seem to be heading towards a high-magnitude En Marche wave in the coming legislative elections. Les Républicains hope to limit the damage, that is, to a loss of around a hundred deputies. The real stakes for LR lie in what comes after, with an eye on a... Full Article
ma On the resignation of mayors and on democracy in general. By marc-vasseur.over-blog.com Published On :: Fri, 16 Nov 2018 16:55:29 +0100 A recent study reveals that nearly half of all mayors do not wish to stand for re-election in the next municipal elections in 2020. The mayor's mandate is unquestionably among the most difficult; it has so many facets, whatever the... Full Article
ma Winter is coming, or choosing between Sovereignty and the Market By marc-vasseur.over-blog.com Published On :: Fri, 14 Dec 2018 12:43:00 +0100 A small change: I am opening up my blog to keep the debate alive, and some will have noticed the new title. From now on there will be a new contributor, Vincent Grenier, an entrepreneur and member of the Conseil National du... Full Article
ma Talking again about the working classes, peri-urban areas, political parties... (post from May 2011) By marc-vasseur.over-blog.com Published On :: Mon, 17 Dec 2018 14:03:16 +0100 Prompted by current events, someone on Twitter unearthed an old post I wrote in May 2011. At the time I was writing about the Parti Socialiste... without false modesty, I had seen things rather clearly... Today we could already speak of another "party", but... Full Article
ma Emmanuel Macron and Social Media By marc-vasseur.over-blog.com Published On :: Tue, 19 Mar 2019 12:45:00 +0100 For Emmanuel Macron it is settled: social media is supposedly the cause of the violence. I quote: "there is an anthropological change in our societies that comes from social media". I confess that such simplistic reasoning leaves me dumbfounded... Full Article
ma News of the day | 10 November - Morning By fr.euronews.com Published On :: Sun, 10 Nov 2024 07:00:03 +0100 News of the day | 10 November - Morning Full Article
ma With Arizona, Donald Trump has now won all seven key states By fr.euronews.com Published On :: Sun, 10 Nov 2024 14:56:11 +0100 With Arizona, Donald Trump has now won all seven key states Full Article